I0203 21:08:40.917713 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0203 21:08:40.918368 8 e2e.go:109] Starting e2e run "7e13b875-caae-4cce-b5dd-eaf74de2dd59" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580764119 - Will randomize all specs
Will run 278 of 4814 specs
Feb 3 21:08:41.020: INFO: >>> kubeConfig: /root/.kube/config
Feb 3 21:08:41.027: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 3 21:08:41.060: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 3 21:08:41.108: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 3 21:08:41.108: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 3 21:08:41.108: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 3 21:08:41.124: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 3 21:08:41.124: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 3 21:08:41.124: INFO: e2e test version: v1.17.0
Feb 3 21:08:41.125: INFO: kube-apiserver version: v1.17.0
Feb 3 21:08:41.125: INFO: >>> kubeConfig: /root/.kube/config
Feb 3 21:08:41.132: INFO: Cluster IP family: ipv4
S
------------------------------
[k8s.io] Pods
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:08:41.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Feb 3 21:08:41.197: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 3 21:08:41.205: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:09:02.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6692" for this suite.
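The spec above drives the whole pod lifecycle through the API server: it opens a watch before submitting the pod, verifies the creation event, then deletes the pod with a grace period and drains the watch until the final DELETED event confirms the kubelet honored the termination notice. A minimal client-go sketch of the same flow, assuming client-go v0.17 (the release matching the v1.17 server logged above; newer releases add a context.Context argument to these calls). The namespace and pod name are illustrative, not the generated ones from this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite reports using.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // the suite uses a generated namespace such as "pods-6692"

	// Set up the watch before submitting the pod, as the test does.
	w, err := cs.CoreV1().Pods(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove", // illustrative name
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-submit-remove"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "docker.io/library/httpd:2.4.38-alpine", // image reused from this suite
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}

	// Delete gracefully (30s grace period), then wait for the DELETED event,
	// mirroring "verifying pod deletion was observed".
	if err := cs.CoreV1().Pods(ns).Delete(pod.Name, metav1.NewDeleteOptions(30)); err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		if ev.Type == "DELETED" {
			break
		}
	}
}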
• [SLOW TEST:21.226 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:09:02.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 3 21:09:02.506: INFO: Waiting up to 5m0s for pod "downward-api-f36567b9-9f9f-4692-9f4e-525321017006" in namespace "downward-api-9698" to be "success or failure"
Feb 3 21:09:02.549: INFO: Pod "downward-api-f36567b9-9f9f-4692-9f4e-525321017006": Phase="Pending", Reason="", readiness=false. Elapsed: 43.070753ms
Feb 3 21:09:04.558: INFO: Pod "downward-api-f36567b9-9f9f-4692-9f4e-525321017006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051425709s
Feb 3 21:09:06.609: INFO: Pod "downward-api-f36567b9-9f9f-4692-9f4e-525321017006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10276194s
Feb 3 21:09:08.619: INFO: Pod "downward-api-f36567b9-9f9f-4692-9f4e-525321017006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11267705s
Feb 3 21:09:10.679: INFO: Pod "downward-api-f36567b9-9f9f-4692-9f4e-525321017006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.173050805s
STEP: Saw pod success
Feb 3 21:09:10.679: INFO: Pod "downward-api-f36567b9-9f9f-4692-9f4e-525321017006" satisfied condition "success or failure"
Feb 3 21:09:10.684: INFO: Trying to get logs from node jerma-node pod downward-api-f36567b9-9f9f-4692-9f4e-525321017006 container dapi-container:
STEP: delete the pod
Feb 3 21:09:10.719: INFO: Waiting for pod downward-api-f36567b9-9f9f-4692-9f4e-525321017006 to disappear
Feb 3 21:09:10.725: INFO: Pod downward-api-f36567b9-9f9f-4692-9f4e-525321017006 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:09:10.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9698" for this suite.
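The downward API spec above relies on a documented fallback: when a container exposes resources through resourceFieldRef env vars but declares no limits of its own, the kubelet substitutes the node's allocatable CPU and memory. A sketch of such a pod spec in client-go v0.17 terms (all names illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// No resource limits are set on the container, so the resourceFieldRef
	// values fall back to the node's allocatable CPU/memory, which is the
	// behavior the test above verifies.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}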
• [SLOW TEST:8.379 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":13,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:09:10.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 3 21:09:10.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4909'
Feb 3 21:09:12.796: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 3 21:09:12.796: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Feb 3 21:09:14.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4909'
Feb 3 21:09:15.162: INFO: stderr: ""
Feb 3 21:09:15.162: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:09:15.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4909" for this suite.
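The deprecation warning in the kubectl output above is the point of interest: --generator=deployment/apps.v1 created an apps/v1 Deployment on the caller's behalf. Roughly the equivalent object built directly against the API, as a hedged client-go v0.17 sketch (the "run" label convention is what kubectl run applied at the time; treat the details as illustrative):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(dep); err != nil {
		panic(err)
	}
}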
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":3,"skipped":18,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:15.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:09:15.358: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 3 21:09:15.398: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 3 21:09:20.406: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 21:09:24.419: INFO: Creating deployment "test-rolling-update-deployment" Feb 3 21:09:24.427: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 3 21:09:24.501: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 3 21:09:26.522: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 3 21:09:26.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:09:28.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:09:30.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716360964, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:09:32.538: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 3 21:09:32.560: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8268 /apis/apps/v1/namespaces/deployment-8268/deployments/test-rolling-update-deployment debc8046-c672-460f-8776-d64b846776a2 6194899 1 2020-02-03 21:09:24 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002823fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-03 21:09:24 +0000 UTC,LastTransitionTime:2020-02-03 21:09:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-03 21:09:31 +0000 
UTC,LastTransitionTime:2020-02-03 21:09:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 3 21:09:32.581: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-8268 /apis/apps/v1/namespaces/deployment-8268/replicasets/test-rolling-update-deployment-67cf4f6444 6e364086-1862-4dbf-8dfb-4b5b35abf0c7 6194888 1 2020-02-03 21:09:24 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment debc8046-c672-460f-8776-d64b846776a2 0xc002916397 0xc002916398}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002916408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:09:32.581: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 3 21:09:32.581: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8268 /apis/apps/v1/namespaces/deployment-8268/replicasets/test-rolling-update-controller 3b2d48cc-17a5-4186-aef7-4ded1b4f80a5 6194898 2 2020-02-03 21:09:15 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment debc8046-c672-460f-8776-d64b846776a2 0xc0029162c7 0xc0029162c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002916328 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:09:32.612: INFO: Pod "test-rolling-update-deployment-67cf4f6444-jlzt8" is available: 
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-jlzt8 test-rolling-update-deployment-67cf4f6444- deployment-8268 /api/v1/namespaces/deployment-8268/pods/test-rolling-update-deployment-67cf4f6444-jlzt8 5a022f3e-7bb7-42db-a467-43cc4c70e8d5 6194887 0 2020-02-03 21:09:24 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 6e364086-1862-4dbf-8dfb-4b5b35abf0c7 0xc002916857 0xc002916858}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hmt8l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hmt8l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hmt8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:09:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:09:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:09:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:09:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-03 21:09:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 21:09:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://fe6a70722f7ecb6c9cf9b7358238311206ecfdc4ee8b1f0af543394bc7aa7933,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:32.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8268" for this suite. • [SLOW TEST:17.444 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":4,"skipped":26,"failed":0} [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:32.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:33.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8931" for this suite. 
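The Lease spec above only asserts that the coordination.k8s.io/v1 API group is served. For orientation, a minimal client-go v0.17 sketch that creates and reads back a Lease; all names and durations are illustrative:

package main

import (
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	holder := "holder-1"       // illustrative holder identity
	seconds := int32(30)       // illustrative lease duration
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "example-lease"}, // illustrative name
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
			AcquireTime:          &metav1.MicroTime{Time: time.Now()},
			RenewTime:            &metav1.MicroTime{Time: time.Now()},
		},
	}
	created, err := cs.CoordinationV1().Leases("default").Create(lease)
	if err != nil {
		panic(err)
	}
	fmt.Println("lease held by:", *created.Spec.HolderIdentity)
}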
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":5,"skipped":26,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:33.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Feb 3 21:09:45.843: INFO: Successfully updated pod "adopt-release-lg555" STEP: Checking that the Job readopts the Pod Feb 3 21:09:45.843: INFO: Waiting up to 15m0s for pod "adopt-release-lg555" in namespace "job-9327" to be "adopted" Feb 3 21:09:45.854: INFO: Pod "adopt-release-lg555": Phase="Running", Reason="", readiness=true. Elapsed: 10.690205ms Feb 3 21:09:47.869: INFO: Pod "adopt-release-lg555": Phase="Running", Reason="", readiness=true. Elapsed: 2.025912906s Feb 3 21:09:47.869: INFO: Pod "adopt-release-lg555" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Feb 3 21:09:48.384: INFO: Successfully updated pod "adopt-release-lg555" STEP: Checking that the Job releases the Pod Feb 3 21:09:48.385: INFO: Waiting up to 15m0s for pod "adopt-release-lg555" in namespace "job-9327" to be "released" Feb 3 21:09:48.430: INFO: Pod "adopt-release-lg555": Phase="Running", Reason="", readiness=true. Elapsed: 45.691359ms Feb 3 21:09:50.439: INFO: Pod "adopt-release-lg555": Phase="Running", Reason="", readiness=true. Elapsed: 2.054033939s Feb 3 21:09:50.439: INFO: Pod "adopt-release-lg555" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:50.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9327" for this suite. 
• [SLOW TEST:17.334 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":6,"skipped":30,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:09:50.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 3 21:09:58.791: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:09:58.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4913" for this suite.
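The spec above hinges on TerminationMessagePolicy: FallbackToLogsOnError. A container that exits non-zero without writing its termination-message file gets the tail of its log copied into the terminated state's message, which is why the test compares against "DONE". A hedged sketch of such a container spec (image, command, and names illustrative; client-go v0.17):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The container exits non-zero without writing /dev/termination-log, so
	// with FallbackToLogsOnError the kubelet copies the log tail ("DONE") into
	// status.containerStatuses[0].state.terminated.message.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-from-logs"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "main",
				Image:                    "busybox",
				Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}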
• [SLOW TEST:8.413 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":75,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:09:58.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-89e17c93-f267-4335-b651-bc9b60eda319
STEP: Creating configMap with name cm-test-opt-upd-9980f348-9493-48f9-9a65-e5de9ba97cb0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-89e17c93-f267-4335-b651-bc9b60eda319
STEP: Updating configmap cm-test-opt-upd-9980f348-9493-48f9-9a65-e5de9ba97cb0
STEP: Creating configMap with name cm-test-opt-create-6228721c-8f3b-4431-a255-93c120d1279f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:11:42.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7772" for this suite.
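The long wait in the spec above ("waiting to observe update in volume") reflects how projected configMap volumes behave: a source marked optional lets the pod start even while the configMap is absent, and the kubelet refreshes the projected files as configMaps are created, updated, or deleted. A sketch of the volume wiring (client-go v0.17; names illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-optional"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-cm",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								// Optional: the pod runs even if this configMap
								// does not exist yet; files appear once it does.
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
								Optional:             boolPtr(true),
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}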
• [SLOW TEST:103.995 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":93,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:11:42.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 3 21:11:42.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1649'
Feb 3 21:11:43.091: INFO: stderr: ""
Feb 3 21:11:43.091: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Feb 3 21:11:43.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1649'
Feb 3 21:11:48.633: INFO: stderr: ""
Feb 3 21:11:48.633: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:11:48.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1649" for this suite.
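Unlike the deployment generator earlier in this run, --restart=Never with --generator=run-pod/v1 creates a single unmanaged pod, which is why the test can delete it directly without a controller recreating it. Approximately the object it produces, as an illustrative client-go v0.17 sketch:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A bare pod: no ReplicaSet or Deployment owns it, and RestartPolicyNever
	// keeps the kubelet from restarting the container after it exits.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-test-httpd-pod",
			Labels: map[string]string{"run": "e2e-test-httpd-pod"}, // kubectl run's label convention
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-httpd-pod",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}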
• [SLOW TEST:5.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":9,"skipped":99,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:11:48.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:11:55.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8403" for this suite.
STEP: Destroying namespace "nsdeletetest-4806" for this suite.
Feb 3 21:11:55.381: INFO: Namespace nsdeletetest-4806 was already deleted
STEP: Destroying namespace "nsdeletetest-2849" for this suite.
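The namespace spec above leans on cascading deletion: deleting a namespace asynchronously removes everything inside it, services included. A compressed client-go v0.17 sketch of the create/delete/verify cycle; names are illustrative, and a faithful reproduction would poll until the namespace is fully gone before recreating it, as the test does:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Create a namespace and put a service in it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest-demo"}} // illustrative
	if _, err := cs.CoreV1().Namespaces().Create(ns); err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "test"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	if _, err := cs.CoreV1().Services(ns.Name).Create(svc); err != nil {
		panic(err)
	}

	// Deleting the namespace cascades to the service; after the namespace is
	// removed and recreated, the service list should come back empty.
	if err := cs.CoreV1().Namespaces().Delete(ns.Name, nil); err != nil {
		panic(err)
	}
	if svcs, err := cs.CoreV1().Services(ns.Name).List(metav1.ListOptions{}); err == nil {
		fmt.Println("services remaining:", len(svcs.Items))
	}
}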
• [SLOW TEST:6.730 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":10,"skipped":105,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:11:55.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 3 21:11:56.273: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 3 21:11:58.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 3 21:12:00.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 3 21:12:02.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361116, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 3 21:12:05.330: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:12:15.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2785" for this suite.
STEP: Destroying namespace "webhook-2785-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:20.276 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":11,"skipped":119,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:12:15.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb 3 21:12:15.767: INFO: namespace kubectl-3224
Feb 3 21:12:15.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3224'
Feb 3 21:12:16.195: INFO: stderr: ""
Feb 3 21:12:16.195: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 3 21:12:17.200: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:17.200: INFO: Found 0 / 1
Feb 3 21:12:18.201: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:18.201: INFO: Found 0 / 1
Feb 3 21:12:19.202: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:19.202: INFO: Found 0 / 1
Feb 3 21:12:20.205: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:20.205: INFO: Found 0 / 1
Feb 3 21:12:21.214: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:21.214: INFO: Found 0 / 1
Feb 3 21:12:22.199: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:22.199: INFO: Found 0 / 1
Feb 3 21:12:23.199: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:23.199: INFO: Found 0 / 1
Feb 3 21:12:24.206: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:24.206: INFO: Found 1 / 1
Feb 3 21:12:24.206: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 3 21:12:24.210: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 3 21:12:24.211: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 3 21:12:24.211: INFO: wait on agnhost-master startup in kubectl-3224
Feb 3 21:12:24.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-rlw6k agnhost-master --namespace=kubectl-3224'
Feb 3 21:12:24.383: INFO: stderr: ""
Feb 3 21:12:24.383: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb 3 21:12:24.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3224'
Feb 3 21:12:24.668: INFO: stderr: ""
Feb 3 21:12:24.668: INFO: stdout: "service/rm2 exposed\n"
Feb 3 21:12:24.708: INFO: Service rm2 in namespace kubectl-3224 found.
STEP: exposing service
Feb 3 21:12:26.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3224'
Feb 3 21:12:26.954: INFO: stderr: ""
Feb 3 21:12:26.954: INFO: stdout: "service/rm3 exposed\n"
Feb 3 21:12:26.961: INFO: Service rm3 in namespace kubectl-3224 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:12:28.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3224" for this suite.
• [SLOW TEST:13.326 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":12,"skipped":135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:12:28.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 3 21:12:29.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f" in namespace "projected-1829" to be "success or failure"
Feb 3 21:12:29.172: INFO: Pod "downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.60465ms
Feb 3 21:12:31.178: INFO: Pod "downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013971939s
Feb 3 21:12:33.181: INFO: Pod "downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017892456s
Feb 3 21:12:35.189: INFO: Pod "downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025893849s
Feb 3 21:12:37.199: INFO: Pod "downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035473067s
STEP: Saw pod success
Feb 3 21:12:37.199: INFO: Pod "downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f" satisfied condition "success or failure"
Feb 3 21:12:37.205: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f container client-container:
STEP: delete the pod
Feb 3 21:12:37.274: INFO: Waiting for pod downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f to disappear
Feb 3 21:12:37.282: INFO: Pod downwardapi-volume-65c078eb-cbb8-403e-8932-16be00a63a2f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:12:37.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1829" for this suite.
• [SLOW TEST:8.317 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":163,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:12:37.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Feb 3 21:12:37.478: INFO: Waiting up to 5m0s for pod "var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c" in namespace "var-expansion-8751" to be "success or failure"
Feb 3 21:12:37.490: INFO: Pod "var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.695485ms
Feb 3 21:12:39.498: INFO: Pod "var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019937459s
Feb 3 21:12:41.505: INFO: Pod "var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026646518s
Feb 3 21:12:43.528: INFO: Pod "var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049410009s
Feb 3 21:12:45.542: INFO: Pod "var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064266432s
STEP: Saw pod success
Feb 3 21:12:45.543: INFO: Pod "var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c" satisfied condition "success or failure"
Feb 3 21:12:45.548: INFO: Trying to get logs from node jerma-node pod var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c container dapi-container:
STEP: delete the pod
Feb 3 21:12:45.585: INFO: Waiting for pod var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c to disappear
Feb 3 21:12:45.602: INFO: Pod var-expansion-ad1db00d-6983-465e-9fa7-30eb6f9eff1c no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:12:45.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8751" for this suite.
• [SLOW TEST:8.306 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":191,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:12:45.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:12:53.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9942" for this suite.
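In the Variable Expansion spec above, the $(MESSAGE) reference in the container's command is expanded by Kubernetes itself from the container's own env, before any shell is involved; an unresolvable reference is left verbatim. An illustrative client-go v0.17 sketch of such a pod:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// $(MESSAGE) is substituted by the API machinery from the Env list below;
	// /bin/echo receives the already-expanded string as its argument.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/echo"},
				Args:    []string{"$(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from substitution"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}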
• [SLOW TEST:8.337 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:12:53.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 3 21:13:02.168: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:13:02.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8767" for this suite.
• [SLOW TEST:8.330 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":236,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:13:02.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:13:18.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1623" for this suite.
• [SLOW TEST:16.576 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":17,"skipped":241,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:13:18.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 3 21:13:19.050: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 21:13:33.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9618" for this suite.
• [SLOW TEST:14.933 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":18,"skipped":247,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 21:13:33.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 3 21:13:33.926: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4367 /api/v1/namespaces/watch-4367/configmaps/e2e-watch-test-watch-closed 89e791ac-f166-417f-acf9-f109c78747c3 6195967 0 2020-02-03 21:13:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 3 21:13:33.926: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4367 /api/v1/namespaces/watch-4367/configmaps/e2e-watch-test-watch-closed
89e791ac-f166-417f-acf9-f109c78747c3 6195968 0 2020-02-03 21:13:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 3 21:13:33.969: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4367 /api/v1/namespaces/watch-4367/configmaps/e2e-watch-test-watch-closed 89e791ac-f166-417f-acf9-f109c78747c3 6195969 0 2020-02-03 21:13:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 3 21:13:33.970: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4367 /api/v1/namespaces/watch-4367/configmaps/e2e-watch-test-watch-closed 89e791ac-f166-417f-acf9-f109c78747c3 6195970 0 2020-02-03 21:13:33 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:33.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4367" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":19,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:33.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Feb 3 21:13:34.047: INFO: Waiting up to 5m0s for pod "pod-5905d2fa-9412-4eca-8303-5458e493e6f8" in namespace "emptydir-2066" to be "success or failure" Feb 3 21:13:34.135: INFO: Pod "pod-5905d2fa-9412-4eca-8303-5458e493e6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 87.198412ms Feb 3 21:13:36.144: INFO: Pod "pod-5905d2fa-9412-4eca-8303-5458e493e6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09628401s Feb 3 21:13:38.148: INFO: Pod "pod-5905d2fa-9412-4eca-8303-5458e493e6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101011062s Feb 3 21:13:40.156: INFO: Pod "pod-5905d2fa-9412-4eca-8303-5458e493e6f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.108198979s STEP: Saw pod success Feb 3 21:13:40.156: INFO: Pod "pod-5905d2fa-9412-4eca-8303-5458e493e6f8" satisfied condition "success or failure" Feb 3 21:13:40.189: INFO: Trying to get logs from node jerma-node pod pod-5905d2fa-9412-4eca-8303-5458e493e6f8 container test-container: STEP: delete the pod Feb 3 21:13:40.235: INFO: Waiting for pod pod-5905d2fa-9412-4eca-8303-5458e493e6f8 to disappear Feb 3 21:13:40.259: INFO: Pod pod-5905d2fa-9412-4eca-8303-5458e493e6f8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:40.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2066" for this suite. • [SLOW TEST:6.291 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":281,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:40.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Feb 3 21:13:40.582: INFO: Waiting up to 5m0s for pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1" in namespace "downward-api-5306" to be "success or failure" Feb 3 21:13:40.588: INFO: Pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.546488ms Feb 3 21:13:42.597: INFO: Pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014416787s Feb 3 21:13:44.607: INFO: Pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024958717s Feb 3 21:13:46.613: INFO: Pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031151086s Feb 3 21:13:48.622: INFO: Pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039616154s Feb 3 21:13:50.631: INFO: Pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.048857552s STEP: Saw pod success Feb 3 21:13:50.631: INFO: Pod "downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1" satisfied condition "success or failure" Feb 3 21:13:50.636: INFO: Trying to get logs from node jerma-node pod downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1 container dapi-container: STEP: delete the pod Feb 3 21:13:50.693: INFO: Waiting for pod downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1 to disappear Feb 3 21:13:50.706: INFO: Pod downward-api-a448d4e5-1cdb-4248-be7e-51b3632047f1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:50.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5306" for this suite. • [SLOW TEST:10.447 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":288,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:50.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f5450923-5ffe-41a2-a3cf-d79b83d5f278 STEP: Creating a pod to test consume secrets Feb 3 21:13:50.996: INFO: Waiting up to 5m0s for pod "pod-secrets-398c7624-397a-4655-a94b-3912db8502a9" in namespace "secrets-6804" to be "success or failure" Feb 3 21:13:51.002: INFO: Pod "pod-secrets-398c7624-397a-4655-a94b-3912db8502a9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.729889ms Feb 3 21:13:53.008: INFO: Pod "pod-secrets-398c7624-397a-4655-a94b-3912db8502a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011087495s Feb 3 21:13:55.015: INFO: Pod "pod-secrets-398c7624-397a-4655-a94b-3912db8502a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018935908s Feb 3 21:13:57.023: INFO: Pod "pod-secrets-398c7624-397a-4655-a94b-3912db8502a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.026354577s STEP: Saw pod success Feb 3 21:13:57.023: INFO: Pod "pod-secrets-398c7624-397a-4655-a94b-3912db8502a9" satisfied condition "success or failure" Feb 3 21:13:57.027: INFO: Trying to get logs from node jerma-node pod pod-secrets-398c7624-397a-4655-a94b-3912db8502a9 container secret-env-test: STEP: delete the pod Feb 3 21:13:57.090: INFO: Waiting for pod pod-secrets-398c7624-397a-4655-a94b-3912db8502a9 to disappear Feb 3 21:13:57.100: INFO: Pod pod-secrets-398c7624-397a-4655-a94b-3912db8502a9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:57.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6804" for this suite. • [SLOW TEST:6.388 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":292,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:57.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 3 21:13:57.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3625' Feb 3 21:13:57.376: INFO: stderr: "" Feb 3 21:13:57.376: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Feb 3 21:14:07.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3625 -o json' Feb 3 21:14:07.646: INFO: stderr: "" Feb 3 21:14:07.646: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-03T21:13:57Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3625\",\n \"resourceVersion\": \"6196151\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3625/pods/e2e-test-httpd-pod\",\n \"uid\": \"fd63c000-acdd-4cb6-a654-fd8b42630e87\"\n },\n \"spec\": {\n \"containers\": [\n 
{\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-24qm4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-24qm4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-24qm4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-03T21:13:57Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-03T21:14:04Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-03T21:14:04Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-03T21:13:57Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://900f78377578a7f59efa2faa6f861317120a6592e12e7e74707410ced98dca98\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-03T21:14:03Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.1\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-03T21:13:57Z\"\n }\n}\n" STEP: replace the image in the pod Feb 3 21:14:07.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3625' Feb 3 21:14:08.177: INFO: stderr: "" Feb 3 21:14:08.177: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882 Feb 3 21:14:08.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3625' Feb 3 21:14:15.210: INFO: stderr: "" Feb 3 21:14:15.211: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:15.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3625" for this 
suite. • [SLOW TEST:18.117 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":23,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:15.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:14:15.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1231' Feb 3 21:14:15.841: INFO: stderr: "" Feb 3 21:14:15.841: INFO: stdout: "replicationcontroller/agnhost-master created\n" Feb 3 21:14:15.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1231' Feb 3 21:14:16.337: INFO: stderr: "" Feb 3 21:14:16.337: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 3 21:14:17.346: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:14:17.346: INFO: Found 0 / 1 Feb 3 21:14:18.346: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:14:18.347: INFO: Found 0 / 1 Feb 3 21:14:19.345: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:14:19.345: INFO: Found 0 / 1 Feb 3 21:14:20.344: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:14:20.344: INFO: Found 0 / 1 Feb 3 21:14:21.345: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:14:21.345: INFO: Found 0 / 1 Feb 3 21:14:22.345: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:14:22.345: INFO: Found 1 / 1 Feb 3 21:14:22.345: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 3 21:14:22.349: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:14:22.349: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
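The Found 0 / 1 ... Found 1 / 1 loop above is the framework polling the app=agnhost selector until the ReplicationController's pod is running. Outside the harness, one way to approximate the same wait (selector and namespace taken from the log; the timeout is illustrative) is:

# Sketch only: block until the rc's pod reports Ready, or time out
kubectl --kubeconfig=/root/.kube/config -n kubectl-1231 \
  wait pod -l app=agnhost --for=condition=Ready --timeout=5m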
Feb 3 21:14:22.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-5lgh8 --namespace=kubectl-1231' Feb 3 21:14:22.564: INFO: stderr: "" Feb 3 21:14:22.564: INFO: stdout: "Name: agnhost-master-5lgh8\nNamespace: kubectl-1231\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Mon, 03 Feb 2020 21:14:15 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://61851ff92108f598d07cb381907a144064d83eb847a3f66e2d65126aea6e58ee\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 03 Feb 2020 21:14:20 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-xll68 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-xll68:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-xll68\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-1231/agnhost-master-5lgh8 to jerma-node\n Normal Pulled 4s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-node Created container agnhost-master\n Normal Started 2s kubelet, jerma-node Started container agnhost-master\n" Feb 3 21:14:22.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1231' Feb 3 21:14:22.704: INFO: stderr: "" Feb 3 21:14:22.704: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1231\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-5lgh8\n" Feb 3 21:14:22.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1231' Feb 3 21:14:22.817: INFO: stderr: "" Feb 3 21:14:22.817: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1231\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.180.217\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Feb 3 21:14:22.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Feb 3 21:14:22.977: INFO: stderr: "" Feb 3 21:14:22.977: INFO: stdout: "Name: jerma-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: <unset>\n RenewTime: Mon, 03 Feb 2020 21:14:18 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 03 Feb 2020 21:09:25 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 03 Feb 2020 21:09:25 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 03 Feb 2020 21:09:25 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 03 Feb 2020 21:09:25 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n kubectl-1231 agnhost-master-5lgh8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Feb 3 21:14:22.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1231' Feb 3 21:14:23.092: INFO: stderr: "" Feb 3 21:14:23.092: INFO: stdout: "Name: kubectl-1231\nLabels: e2e-framework=kubectl\n e2e-run=7e13b875-caae-4cce-b5dd-eaf74de2dd59\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:23.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1231" for this suite. 
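Stripped of the harness plumbing, the describe sequence this spec just drove is the following five commands, with every object name copied from the run above:

# Same objects the spec described, replayed by hand
KUBECTL='kubectl --kubeconfig=/root/.kube/config'
$KUBECTL describe pod agnhost-master-5lgh8 --namespace=kubectl-1231
$KUBECTL describe rc agnhost-master --namespace=kubectl-1231
$KUBECTL describe service agnhost-master --namespace=kubectl-1231
$KUBECTL describe node jerma-node
$KUBECTL describe namespace kubectl-1231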
• [SLOW TEST:7.877 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":24,"skipped":367,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:23.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4129 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 3 21:14:23.270: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 3 21:14:59.523: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4129 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:14:59.523: INFO: >>> kubeConfig: /root/.kube/config I0203 21:14:59.578291 8 log.go:172] (0xc002a68000) (0xc00225cb40) Create stream I0203 21:14:59.578379 8 log.go:172] (0xc002a68000) (0xc00225cb40) Stream added, broadcasting: 1 I0203 21:14:59.582811 8 log.go:172] (0xc002a68000) Reply frame received for 1 I0203 21:14:59.582878 8 log.go:172] (0xc002a68000) (0xc0023fa000) Create stream I0203 21:14:59.582898 8 log.go:172] (0xc002a68000) (0xc0023fa000) Stream added, broadcasting: 3 I0203 21:14:59.584402 8 log.go:172] (0xc002a68000) Reply frame received for 3 I0203 21:14:59.584440 8 log.go:172] (0xc002a68000) (0xc0023fa0a0) Create stream I0203 21:14:59.584454 8 log.go:172] (0xc002a68000) (0xc0023fa0a0) Stream added, broadcasting: 5 I0203 21:14:59.587516 8 log.go:172] (0xc002a68000) Reply frame received for 5 I0203 21:15:00.709106 8 log.go:172] (0xc002a68000) Data frame received for 3 I0203 21:15:00.709333 8 log.go:172] (0xc0023fa000) (3) Data frame handling I0203 21:15:00.709414 8 log.go:172] (0xc0023fa000) (3) Data frame sent I0203 21:15:00.831550 8 log.go:172] (0xc002a68000) Data frame received for 1 I0203 21:15:00.831658 8 log.go:172] (0xc002a68000) (0xc0023fa000) Stream removed, broadcasting: 3 I0203 21:15:00.831757 8 log.go:172] (0xc00225cb40) (1) Data frame handling I0203 21:15:00.831778 8 log.go:172] (0xc00225cb40) (1) Data frame sent I0203 21:15:00.831791 8 log.go:172] (0xc002a68000) (0xc00225cb40) Stream removed, broadcasting: 1 I0203 21:15:00.832441 8 log.go:172] (0xc002a68000) 
(0xc0023fa0a0) Stream removed, broadcasting: 5 I0203 21:15:00.832506 8 log.go:172] (0xc002a68000) (0xc00225cb40) Stream removed, broadcasting: 1 I0203 21:15:00.832527 8 log.go:172] (0xc002a68000) (0xc0023fa000) Stream removed, broadcasting: 3 I0203 21:15:00.832545 8 log.go:172] (0xc002a68000) (0xc0023fa0a0) Stream removed, broadcasting: 5 I0203 21:15:00.832916 8 log.go:172] (0xc002a68000) Go away received Feb 3 21:15:00.833: INFO: Found all expected endpoints: [netserver-0] Feb 3 21:15:00.840: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4129 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:15:00.841: INFO: >>> kubeConfig: /root/.kube/config I0203 21:15:00.913315 8 log.go:172] (0xc001ffa370) (0xc002a621e0) Create stream I0203 21:15:00.914372 8 log.go:172] (0xc001ffa370) (0xc002a621e0) Stream added, broadcasting: 1 I0203 21:15:00.933554 8 log.go:172] (0xc001ffa370) Reply frame received for 1 I0203 21:15:00.933917 8 log.go:172] (0xc001ffa370) (0xc002a62320) Create stream I0203 21:15:00.933995 8 log.go:172] (0xc001ffa370) (0xc002a62320) Stream added, broadcasting: 3 I0203 21:15:00.943381 8 log.go:172] (0xc001ffa370) Reply frame received for 3 I0203 21:15:00.943731 8 log.go:172] (0xc001ffa370) (0xc0023546e0) Create stream I0203 21:15:00.943787 8 log.go:172] (0xc001ffa370) (0xc0023546e0) Stream added, broadcasting: 5 I0203 21:15:00.953903 8 log.go:172] (0xc001ffa370) Reply frame received for 5 I0203 21:15:02.043960 8 log.go:172] (0xc001ffa370) Data frame received for 3 I0203 21:15:02.044049 8 log.go:172] (0xc002a62320) (3) Data frame handling I0203 21:15:02.044075 8 log.go:172] (0xc002a62320) (3) Data frame sent I0203 21:15:02.153366 8 log.go:172] (0xc001ffa370) Data frame received for 1 I0203 21:15:02.153654 8 log.go:172] (0xc002a621e0) (1) Data frame handling I0203 21:15:02.153795 8 log.go:172] (0xc002a621e0) (1) Data frame sent I0203 21:15:02.154764 8 log.go:172] (0xc001ffa370) (0xc002a621e0) Stream removed, broadcasting: 1 I0203 21:15:02.156027 8 log.go:172] (0xc001ffa370) (0xc002a62320) Stream removed, broadcasting: 3 I0203 21:15:02.156312 8 log.go:172] (0xc001ffa370) (0xc0023546e0) Stream removed, broadcasting: 5 I0203 21:15:02.156450 8 log.go:172] (0xc001ffa370) (0xc002a621e0) Stream removed, broadcasting: 1 I0203 21:15:02.156479 8 log.go:172] (0xc001ffa370) (0xc002a62320) Stream removed, broadcasting: 3 I0203 21:15:02.156498 8 log.go:172] (0xc001ffa370) (0xc0023546e0) Stream removed, broadcasting: 5 Feb 3 21:15:02.156: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:02.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0203 21:15:02.158942 8 log.go:172] (0xc001ffa370) Go away received STEP: Destroying namespace "pod-network-test-4129" for this suite. 
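Each ExecWithOptions call above is simply an exec into the host-network test pod that fires a one-shot UDP probe at a netserver endpoint with nc and filters out blank lines. The first probe, replayed as a standalone command (pod, container, namespace, and target copied from the log):

# Sketch only: manual replay of the first UDP connectivity probe
kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod \
  -n pod-network-test-4129 -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'"

A non-empty reply (the netserver pod's hostname) is what lets the test log "Found all expected endpoints".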
• [SLOW TEST:39.069 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":367,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:02.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:14.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9668" for this suite. • [SLOW TEST:12.742 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":26,"skipped":375,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:14.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:15:15.118: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 3 21:15:20.162: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 21:15:22.176: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 3 21:15:24.184: INFO: Creating deployment "test-rollover-deployment" Feb 3 21:15:24.201: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 3 21:15:26.217: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 3 21:15:26.227: INFO: Ensure that both replica sets have 1 created replica Feb 3 21:15:26.237: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 3 21:15:26.247: INFO: Updating deployment test-rollover-deployment Feb 3 21:15:26.247: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 3 21:15:28.266: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 3 21:15:28.277: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 3 21:15:28.285: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:28.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:30.297: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:30.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:32.298: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:32.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:34.305: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:34.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:36.295: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:36.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361335, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:38.297: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:38.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361335, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:40.308: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:40.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361335, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:42.319: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:42.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361335, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:44.296: INFO: all replica sets need to contain the pod-template-hash label Feb 3 21:15:44.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361335, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361324, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:15:46.305: INFO: Feb 3 21:15:46.306: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 3 21:15:46.327: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-61 /apis/apps/v1/namespaces/deployment-61/deployments/test-rollover-deployment e1bee706-4f33-4e9a-86b7-bf3210298bd4 6196596 2 2020-02-03 21:15:24 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ed8b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-03 21:15:24 +0000 UTC,LastTransitionTime:2020-02-03 21:15:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-03 21:15:45 +0000 UTC,LastTransitionTime:2020-02-03 21:15:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 3 21:15:46.337: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-61 /apis/apps/v1/namespaces/deployment-61/replicasets/test-rollover-deployment-574d6dfbff 7f60bc27-60de-4c4a-b792-8128c8271325 6196585 2 2020-02-03 21:15:26 +0000 UTC map[name:rollover-pod 
pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e1bee706-4f33-4e9a-86b7-bf3210298bd4 0xc000ed8fa7 0xc000ed8fa8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ed9018 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:15:46.337: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 3 21:15:46.338: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-61 /apis/apps/v1/namespaces/deployment-61/replicasets/test-rollover-controller f3b1b951-1599-45dc-9b0c-53aaf859ae84 6196594 2 2020-02-03 21:15:15 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e1bee706-4f33-4e9a-86b7-bf3210298bd4 0xc000ed8eaf 0xc000ed8ec0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000ed8f28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:15:46.338: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-61 /apis/apps/v1/namespaces/deployment-61/replicasets/test-rollover-deployment-f6c94f66c 4a76c97d-9bc5-4f59-b639-b7e5bf1013d2 6196530 2 2020-02-03 21:15:24 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e1bee706-4f33-4e9a-86b7-bf3210298bd4 0xc000ed9080 0xc000ed9081}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ed90f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:15:46.348: INFO: Pod "test-rollover-deployment-574d6dfbff-lvk92" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-lvk92 test-rollover-deployment-574d6dfbff- deployment-61 /api/v1/namespaces/deployment-61/pods/test-rollover-deployment-574d6dfbff-lvk92 7b88790d-dfa0-4f60-9f2e-7664f44cb504 6196559 0 2020-02-03 21:15:26 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 7f60bc27-60de-4c4a-b792-8128c8271325 0xc000ed9627 0xc000ed9628}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h7jhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h7jhp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h7jhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affini
ty:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:15:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:15:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-03 21:15:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 21:15:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://dec1c52213f05950e82c49f9aa21fb04235fef6d9a4b3c58f403d99c33912fc7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:46.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-61" for this suite. 
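The rollover spec that just finished tearing down is worth a gloss: it starts from a bare ReplicaSet ("test-rollover-controller"), wraps it in a Deployment whose RollingUpdate strategy is pinned to MaxSurge:1, MaxUnavailable:0 and MinReadySeconds:10, rolls through a deliberately broken intermediate revision (the gb-redisslave:nonexistent template), and finally asserts that the new ReplicaSet owns the single available replica while every old ReplicaSet is scaled to zero. A minimal sketch of that final "Ensure that both old replica sets have no replicas" check, using locally defined stand-in types rather than the real apps/v1 structs (the names below are illustrative, not the e2e framework's helpers):

package main

import "fmt"

// replicaSet is a stand-in for the handful of apps/v1 ReplicaSet fields the
// rollover check cares about.
type replicaSet struct {
	name              string
	specReplicas      int32
	availableReplicas int32
}

// rolloverComplete mirrors the assertion in the log: the new ReplicaSet must
// carry all desired replicas and every old one must be fully scaled down.
func rolloverComplete(desired int32, newRS replicaSet, oldRSs []replicaSet) bool {
	if newRS.specReplicas != desired || newRS.availableReplicas != desired {
		return false
	}
	for _, rs := range oldRSs {
		if rs.specReplicas != 0 || rs.availableReplicas != 0 {
			return false
		}
	}
	return true
}

func main() {
	newRS := replicaSet{"test-rollover-deployment-574d6dfbff", 1, 1}
	oldRSs := []replicaSet{
		{"test-rollover-controller", 0, 0},
		{"test-rollover-deployment-f6c94f66c", 0, 0},
	}
	fmt.Println("rollover complete:", rolloverComplete(1, newRS, oldRSs)) // true
}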
• [SLOW TEST:31.443 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":27,"skipped":378,"failed":0} [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:46.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:56.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5328" for this suite. • [SLOW TEST:10.278 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:56.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:15:56.788: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Feb 3 21:15:56.877: INFO: Number of nodes with available pods: 0 Feb 3 21:15:56.878: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:15:57.892: INFO: Number of nodes with available pods: 0 Feb 3 21:15:57.892: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:15:59.075: INFO: Number of nodes with available pods: 0 Feb 3 21:15:59.075: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:16:00.009: INFO: Number of nodes with available pods: 0 Feb 3 21:16:00.009: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:16:00.889: INFO: Number of nodes with available pods: 0 Feb 3 21:16:00.890: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:16:01.920: INFO: Number of nodes with available pods: 0 Feb 3 21:16:01.920: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:16:04.682: INFO: Number of nodes with available pods: 0 Feb 3 21:16:04.683: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:16:05.676: INFO: Number of nodes with available pods: 1 Feb 3 21:16:05.677: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:05.950: INFO: Number of nodes with available pods: 1 Feb 3 21:16:05.950: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:06.893: INFO: Number of nodes with available pods: 2 Feb 3 21:16:06.893: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 3 21:16:06.986: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:06.986: INFO: Wrong image for pod: daemon-set-qpxh8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:08.004: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:08.004: INFO: Wrong image for pod: daemon-set-qpxh8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:09.010: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:09.010: INFO: Wrong image for pod: daemon-set-qpxh8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:10.003: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:10.003: INFO: Wrong image for pod: daemon-set-qpxh8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:11.005: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:11.005: INFO: Wrong image for pod: daemon-set-qpxh8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:11.005: INFO: Pod daemon-set-qpxh8 is not available Feb 3 21:16:12.005: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 3 21:16:12.005: INFO: Pod daemon-set-6rpmr is not available Feb 3 21:16:13.002: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:13.002: INFO: Pod daemon-set-6rpmr is not available Feb 3 21:16:14.005: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:14.005: INFO: Pod daemon-set-6rpmr is not available Feb 3 21:16:15.018: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:15.018: INFO: Pod daemon-set-6rpmr is not available Feb 3 21:16:16.003: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:16.003: INFO: Pod daemon-set-6rpmr is not available Feb 3 21:16:17.004: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:17.004: INFO: Pod daemon-set-6rpmr is not available Feb 3 21:16:18.004: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:19.003: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:20.005: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:21.027: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:21.027: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:22.006: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:22.006: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:23.007: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:23.007: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:24.006: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:24.006: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:25.005: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:25.005: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:26.003: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:26.004: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:27.007: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:27.007: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:28.007: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 3 21:16:28.007: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:29.003: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:29.003: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:30.005: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:30.006: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:31.006: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:31.006: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:32.004: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:32.004: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:33.006: INFO: Wrong image for pod: daemon-set-25czl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 21:16:33.006: INFO: Pod daemon-set-25czl is not available Feb 3 21:16:34.254: INFO: Pod daemon-set-2lrxx is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 3 21:16:34.605: INFO: Number of nodes with available pods: 1 Feb 3 21:16:34.605: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:35.615: INFO: Number of nodes with available pods: 1 Feb 3 21:16:35.615: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:36.696: INFO: Number of nodes with available pods: 1 Feb 3 21:16:36.696: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:38.405: INFO: Number of nodes with available pods: 1 Feb 3 21:16:38.405: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:39.060: INFO: Number of nodes with available pods: 1 Feb 3 21:16:39.060: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:39.927: INFO: Number of nodes with available pods: 1 Feb 3 21:16:39.928: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:40.618: INFO: Number of nodes with available pods: 1 Feb 3 21:16:40.618: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:41.619: INFO: Number of nodes with available pods: 1 Feb 3 21:16:41.619: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:16:42.645: INFO: Number of nodes with available pods: 2 Feb 3 21:16:42.645: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2314, will wait for the garbage collector to delete the pods Feb 3 21:16:42.764: INFO: Deleting DaemonSet.extensions daemon-set took: 13.213056ms Feb 3 21:16:43.065: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.905524ms Feb 3 21:16:52.473: INFO: Number of nodes with available pods: 0 Feb 3 21:16:52.473: INFO: Number of running nodes: 0, number of available pods: 0 Feb 3 21:16:52.476: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2314/daemonsets","resourceVersion":"6196880"},"items":null} Feb 3 21:16:52.479: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2314/pods","resourceVersion":"6196880"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:16:52.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2314" for this suite. • [SLOW TEST:55.857 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":29,"skipped":395,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:16:52.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0203 21:17:22.683107 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 3 21:17:22.683: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:22.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6008" for this suite. • [SLOW TEST:30.197 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":30,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:22.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:17:22.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Feb 3 21:17:23.705: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-03T21:17:23Z generation:1 name:name1 resourceVersion:6197026 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7b542f8e-15c9-4632-92c4-d3f4f65d78f7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Feb 3 21:17:33.717: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-03T21:17:33Z generation:1 name:name2 resourceVersion:6197067 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e63a38c9-fb9b-42b8-a3f6-8b68904da9c9] num:map[num1:9223372036854775807 
num2:1000000]]} STEP: Modifying first CR Feb 3 21:17:43.726: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-03T21:17:23Z generation:2 name:name1 resourceVersion:6197087 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7b542f8e-15c9-4632-92c4-d3f4f65d78f7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Feb 3 21:17:53.737: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-03T21:17:33Z generation:2 name:name2 resourceVersion:6197111 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e63a38c9-fb9b-42b8-a3f6-8b68904da9c9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Feb 3 21:18:04.192: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-03T21:17:23Z generation:2 name:name1 resourceVersion:6197135 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7b542f8e-15c9-4632-92c4-d3f4f65d78f7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Feb 3 21:18:14.208: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-03T21:17:33Z generation:2 name:name2 resourceVersion:6197159 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e63a38c9-fb9b-42b8-a3f6-8b68904da9c9] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:18:24.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4767" for this suite. 
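Each "Got :" line above is the test's watch client receiving exactly one event per mutation: ADDED when a CR is created, MODIFIED when the dummy field is patched, DELETED when it is removed, with ten-second pauses so the ordering is unambiguous. A sketch of that order check, with a locally defined event type standing in for apimachinery's watch.Event (illustrative, not the test's actual code):

package main

import "fmt"

// event stands in for watch.Event: a type tag plus the object it refers to.
type event struct {
	typ  string // "ADDED", "MODIFIED", "DELETED"
	name string
}

// expectOrder mirrors the spec's assertion: events for one object must arrive
// as ADDED, then MODIFIED, then DELETED, with nothing out of sequence.
func expectOrder(events []event, name string) bool {
	want := []string{"ADDED", "MODIFIED", "DELETED"}
	i := 0
	for _, e := range events {
		if e.name != name {
			continue
		}
		if i >= len(want) || e.typ != want[i] {
			return false
		}
		i++
	}
	return i == len(want)
}

func main() {
	stream := []event{
		{"ADDED", "name1"}, {"ADDED", "name2"},
		{"MODIFIED", "name1"}, {"MODIFIED", "name2"},
		{"DELETED", "name1"}, {"DELETED", "name2"},
	}
	fmt.Println("name1 ok:", expectOrder(stream, "name1")) // true
	fmt.Println("name2 ok:", expectOrder(stream, "name2")) // true
}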
• [SLOW TEST:62.056 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":31,"skipped":436,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:18:24.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 3 21:18:25.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6774' Feb 3 21:18:25.952: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 3 21:18:25.953: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Feb 3 21:18:26.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6774' Feb 3 21:18:26.273: INFO: stderr: "" Feb 3 21:18:26.273: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:18:26.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6774" for this suite. 
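The stderr captured above is the interesting part of this spec: kubectl run with --generator=job/v1 was already deprecated in v1.17 and the generators have since been removed. What the generator produced is, roughly, the batch/v1 Job below, built here as plain Go data; the job name and image are taken from the log, restartPolicy: OnFailure is the property the spec name advertises, and the container name simply reuses the job name as the generator did:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Roughly what `kubectl run e2e-test-httpd-job --restart=OnFailure
	// --generator=job/v1 --image=...` generated: a Job whose pod template
	// restarts failed containers in place rather than re-running the pod.
	job := map[string]interface{}{
		"apiVersion": "batch/v1",
		"kind":       "Job",
		"metadata":   map[string]interface{}{"name": "e2e-test-httpd-job"},
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"containers": []map[string]interface{}{
						{"name": "e2e-test-httpd-job", "image": "docker.io/library/httpd:2.4.38-alpine"},
					},
					"restartPolicy": "OnFailure",
				},
			},
		},
	}
	b, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(b))
}

The non-deprecated route is kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine, with the caveat that create job fills in restartPolicy: Never, so OnFailure still needs an explicit manifest like the one above.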
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":32,"skipped":437,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:18:26.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3311 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 3 21:18:26.400: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 3 21:18:58.740: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3311 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:18:58.740: INFO: >>> kubeConfig: /root/.kube/config I0203 21:18:58.795466 8 log.go:172] (0xc002a68000) (0xc0023fa5a0) Create stream I0203 21:18:58.795543 8 log.go:172] (0xc002a68000) (0xc0023fa5a0) Stream added, broadcasting: 1 I0203 21:18:58.799985 8 log.go:172] (0xc002a68000) Reply frame received for 1 I0203 21:18:58.800065 8 log.go:172] (0xc002a68000) (0xc00121b540) Create stream I0203 21:18:58.800098 8 log.go:172] (0xc002a68000) (0xc00121b540) Stream added, broadcasting: 3 I0203 21:18:58.802371 8 log.go:172] (0xc002a68000) Reply frame received for 3 I0203 21:18:58.802419 8 log.go:172] (0xc002a68000) (0xc0022de5a0) Create stream I0203 21:18:58.802441 8 log.go:172] (0xc002a68000) (0xc0022de5a0) Stream added, broadcasting: 5 I0203 21:18:58.805224 8 log.go:172] (0xc002a68000) Reply frame received for 5 I0203 21:18:58.938716 8 log.go:172] (0xc002a68000) Data frame received for 3 I0203 21:18:58.938817 8 log.go:172] (0xc00121b540) (3) Data frame handling I0203 21:18:58.938850 8 log.go:172] (0xc00121b540) (3) Data frame sent I0203 21:18:59.024412 8 log.go:172] (0xc002a68000) (0xc00121b540) Stream removed, broadcasting: 3 I0203 21:18:59.024697 8 log.go:172] (0xc002a68000) Data frame received for 1 I0203 21:18:59.024804 8 log.go:172] (0xc002a68000) (0xc0022de5a0) Stream removed, broadcasting: 5 I0203 21:18:59.024917 8 log.go:172] (0xc0023fa5a0) (1) Data frame handling I0203 21:18:59.024956 8 log.go:172] (0xc0023fa5a0) (1) Data frame sent I0203 21:18:59.024975 8 log.go:172] (0xc002a68000) (0xc0023fa5a0) Stream removed, broadcasting: 1 I0203 21:18:59.025006 8 log.go:172] (0xc002a68000) Go away received I0203 21:18:59.025452 8 log.go:172] (0xc002a68000) (0xc0023fa5a0) Stream removed, broadcasting: 1 I0203 21:18:59.025473 8 log.go:172] (0xc002a68000) (0xc00121b540) Stream removed, broadcasting: 3 I0203 21:18:59.025489 8 log.go:172] (0xc002a68000) (0xc0022de5a0) Stream removed, 
broadcasting: 5 Feb 3 21:18:59.025: INFO: Waiting for responses: map[] Feb 3 21:18:59.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3311 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:18:59.031: INFO: >>> kubeConfig: /root/.kube/config I0203 21:18:59.076812 8 log.go:172] (0xc002660b00) (0xc002452460) Create stream I0203 21:18:59.077007 8 log.go:172] (0xc002660b00) (0xc002452460) Stream added, broadcasting: 1 I0203 21:18:59.081931 8 log.go:172] (0xc002660b00) Reply frame received for 1 I0203 21:18:59.081956 8 log.go:172] (0xc002660b00) (0xc00225c0a0) Create stream I0203 21:18:59.081967 8 log.go:172] (0xc002660b00) (0xc00225c0a0) Stream added, broadcasting: 3 I0203 21:18:59.083550 8 log.go:172] (0xc002660b00) Reply frame received for 3 I0203 21:18:59.083592 8 log.go:172] (0xc002660b00) (0xc0023fa6e0) Create stream I0203 21:18:59.083604 8 log.go:172] (0xc002660b00) (0xc0023fa6e0) Stream added, broadcasting: 5 I0203 21:18:59.087460 8 log.go:172] (0xc002660b00) Reply frame received for 5 I0203 21:18:59.155665 8 log.go:172] (0xc002660b00) Data frame received for 3 I0203 21:18:59.155726 8 log.go:172] (0xc00225c0a0) (3) Data frame handling I0203 21:18:59.155741 8 log.go:172] (0xc00225c0a0) (3) Data frame sent I0203 21:18:59.212864 8 log.go:172] (0xc002660b00) (0xc00225c0a0) Stream removed, broadcasting: 3 I0203 21:18:59.213270 8 log.go:172] (0xc002660b00) Data frame received for 1 I0203 21:18:59.213505 8 log.go:172] (0xc002452460) (1) Data frame handling I0203 21:18:59.213605 8 log.go:172] (0xc002452460) (1) Data frame sent I0203 21:18:59.213662 8 log.go:172] (0xc002660b00) (0xc002452460) Stream removed, broadcasting: 1 I0203 21:18:59.214238 8 log.go:172] (0xc002660b00) (0xc0023fa6e0) Stream removed, broadcasting: 5 I0203 21:18:59.214277 8 log.go:172] (0xc002660b00) Go away received I0203 21:18:59.214393 8 log.go:172] (0xc002660b00) (0xc002452460) Stream removed, broadcasting: 1 I0203 21:18:59.214424 8 log.go:172] (0xc002660b00) (0xc00225c0a0) Stream removed, broadcasting: 3 I0203 21:18:59.214464 8 log.go:172] (0xc002660b00) (0xc0023fa6e0) Stream removed, broadcasting: 5 Feb 3 21:18:59.214: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:18:59.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3311" for this suite. 
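The two ExecWithOptions blocks above are the whole substance of this networking check: the suite shells into the host-network test pod and curls agnhost's /dial endpoint, which relays an HTTP "hostname" request to each target pod IP (10.44.0.1 on one node, 10.32.0.4 on the other) and reports who answered; "Waiting for responses: map[]" means the expected-response set has been emptied. A plain-Go sketch of the same probe; the query parameters are lifted from the log, while the response shape (a JSON object carrying a "responses" array) is an assumption about agnhost rather than something the log shows:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// dial asks the agnhost test container at proxyAddr to contact targetHost
// over HTTP and return the hostnames that answered.
func dial(proxyAddr, targetHost string, port int) ([]string, error) {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", targetHost)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", "1")
	resp, err := http.Get("http://" + proxyAddr + "/dial?" + q.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	// Assumed response shape: {"responses":["<hostname>", ...]}.
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Responses, nil
}

func main() {
	hosts, err := dial("10.44.0.2:8080", "10.44.0.1", 8080)
	fmt.Println(hosts, err)
}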
• [SLOW TEST:32.942 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":458,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:18:59.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 3 21:18:59.508: INFO: Number of nodes with available pods: 0 Feb 3 21:18:59.508: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:19:01.947: INFO: Number of nodes with available pods: 0 Feb 3 21:19:01.947: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:19:02.527: INFO: Number of nodes with available pods: 0 Feb 3 21:19:02.527: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:19:03.523: INFO: Number of nodes with available pods: 0 Feb 3 21:19:03.523: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:19:05.600: INFO: Number of nodes with available pods: 0 Feb 3 21:19:05.600: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:19:07.881: INFO: Number of nodes with available pods: 0 Feb 3 21:19:07.882: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:19:09.182: INFO: Number of nodes with available pods: 0 Feb 3 21:19:09.182: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:19:09.566: INFO: Number of nodes with available pods: 1 Feb 3 21:19:09.566: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:10.968: INFO: Number of nodes with available pods: 1 Feb 3 21:19:10.968: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:11.716: INFO: Number of nodes with available pods: 1 Feb 3 21:19:11.717: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:12.678: INFO: Number of nodes with available pods: 2 Feb 3 21:19:12.678: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Feb 3 21:19:12.859: INFO: Number of nodes with available pods: 1 Feb 3 21:19:12.859: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:16.009: INFO: Number of nodes with available pods: 1 Feb 3 21:19:16.009: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:17.334: INFO: Number of nodes with available pods: 1 Feb 3 21:19:17.334: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:18.124: INFO: Number of nodes with available pods: 1 Feb 3 21:19:18.124: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:18.874: INFO: Number of nodes with available pods: 1 Feb 3 21:19:18.874: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:20.140: INFO: Number of nodes with available pods: 1 Feb 3 21:19:20.141: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:20.878: INFO: Number of nodes with available pods: 1 Feb 3 21:19:20.878: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:22.462: INFO: Number of nodes with available pods: 1 Feb 3 21:19:22.462: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:23.013: INFO: Number of nodes with available pods: 1 Feb 3 21:19:23.013: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:23.893: INFO: Number of nodes with available pods: 1 Feb 3 21:19:23.893: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 3 21:19:24.881: INFO: Number of nodes with available pods: 2 Feb 3 21:19:24.881: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7007, will wait for the garbage collector to delete the pods Feb 3 21:19:24.965: INFO: Deleting DaemonSet.extensions daemon-set took: 13.903286ms Feb 3 21:19:25.366: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.992633ms Feb 3 21:19:43.171: INFO: Number of nodes with available pods: 0 Feb 3 21:19:43.171: INFO: Number of running nodes: 0, number of available pods: 0 Feb 3 21:19:43.175: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7007/daemonsets","resourceVersion":"6197505"},"items":null} Feb 3 21:19:43.177: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7007/pods","resourceVersion":"6197505"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:19:43.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7007" for this suite. 
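The spec above forces one daemon pod into phase Failed and then simply re-counts: the available-node tally drops to 1 until the DaemonSet controller notices the failed pod, deletes it, and schedules a replacement, after which the tally returns to 2. A minimal sketch of that tally (stand-in types, not the framework's checkDaemonStatus helper):

package main

import "fmt"

type pod struct {
	node      string
	available bool
}

// nodesWithAvailablePods reproduces the count the log keeps printing: how
// many distinct nodes currently run at least one available daemon pod.
func nodesWithAvailablePods(pods []pod) int {
	seen := map[string]bool{}
	for _, p := range pods {
		if p.available {
			seen[p.node] = true
		}
	}
	return len(seen)
}

func main() {
	pods := []pod{
		{"jerma-node", false},               // failed pod awaiting replacement
		{"jerma-server-mvvl6gufaqub", true}, // healthy daemon pod
	}
	fmt.Println(nodesWithAvailablePods(pods), "of 2 nodes ready") // 1 of 2
}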
• [SLOW TEST:43.969 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":34,"skipped":480,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:19:43.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:19:43.592: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 3 21:19:48.604: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 21:19:50.629: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 3 21:19:58.769: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9236 /apis/apps/v1/namespaces/deployment-9236/deployments/test-cleanup-deployment c583ab7b-4c6f-4466-9764-d0aa9a2a9d1b 6197608 1 2020-02-03 21:19:50 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ed9bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum
availability.,LastUpdateTime:2020-02-03 21:19:50 +0000 UTC,LastTransitionTime:2020-02-03 21:19:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-02-03 21:19:56 +0000 UTC,LastTransitionTime:2020-02-03 21:19:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 3 21:19:58.772: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-9236 /apis/apps/v1/namespaces/deployment-9236/replicasets/test-cleanup-deployment-55ffc6b7b6 e0b71ea3-821c-42f6-b80b-889793b41a25 6197597 1 2020-02-03 21:19:50 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c583ab7b-4c6f-4466-9764-d0aa9a2a9d1b 0xc000ed9fa7 0xc000ed9fa8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000670c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:19:58.777: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-4xkj9" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-4xkj9 test-cleanup-deployment-55ffc6b7b6- deployment-9236 /api/v1/namespaces/deployment-9236/pods/test-cleanup-deployment-55ffc6b7b6-4xkj9 c62d4a29-466f-437b-b154-8f197f41bd32 6197596 0 2020-02-03 21:19:50 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 e0b71ea3-821c-42f6-b80b-889793b41a25 0xc0004a8597 0xc0004a8598}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z5qlv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z5qlv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z5qlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:19:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:19:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:19:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-03 21:19:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 21:19:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://100750558f6b545faf0a77d689b4e26124a8fd1591e46e10ae5c7a61d5f65825,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:19:58.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9236" for this suite. • [SLOW TEST:15.589 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":35,"skipped":481,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:19:58.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-3305666f-c493-44b1-aa60-ceebddc8a7fa STEP: Creating a pod to test consume secrets Feb 3 21:19:59.000: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa" in namespace "projected-8256" to be "success or failure" Feb 3 21:19:59.030: INFO: Pod "pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa": Phase="Pending", Reason="", readiness=false. Elapsed: 29.803ms Feb 3 21:20:01.035: INFO: Pod "pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034858548s Feb 3 21:20:03.041: INFO: Pod "pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040398932s Feb 3 21:20:05.047: INFO: Pod "pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047137566s Feb 3 21:20:07.053: INFO: Pod "pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.052430955s STEP: Saw pod success Feb 3 21:20:07.053: INFO: Pod "pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa" satisfied condition "success or failure" Feb 3 21:20:07.055: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa container projected-secret-volume-test: STEP: delete the pod Feb 3 21:20:07.280: INFO: Waiting for pod pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa to disappear Feb 3 21:20:07.295: INFO: Pod pod-projected-secrets-1071ea3e-59a9-47cb-ba8e-1968e65ed4aa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:07.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8256" for this suite. • [SLOW TEST:8.552 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":488,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:07.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Feb 3 21:20:14.042: INFO: Successfully updated pod "labelsupdateb4e39428-08ca-40b4-ae0c-c087e505f36f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-213" for this suite. 
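The quiet middle of the labels spec just above ("Successfully updated pod" followed by a few seconds of nothing) is the kubelet at work: the test patches the pod's labels and then polls the projected downwardAPI volume until the labels file is re-rendered on the next sync. A sketch of the volume source being exercised, built as plain Go data; the fieldPath and per-item path follow the downward API schema, while the volume name is illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A projected downwardAPI volume exposing the pod's labels as a file;
	// when the labels change, the kubelet re-renders the file on its next
	// sync, which is what the test polls for.
	volume := map[string]interface{}{
		"name": "podinfo",
		"projected": map[string]interface{}{
			"sources": []map[string]interface{}{
				{
					"downwardAPI": map[string]interface{}{
						"items": []map[string]interface{}{
							{
								"path":     "labels",
								"fieldRef": map[string]interface{}{"fieldPath": "metadata.labels"},
							},
						},
					},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(volume, "", "  ")
	fmt.Println(string(b))
}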
• [SLOW TEST:10.779 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:18.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:20:18.275: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432" in namespace "downward-api-6126" to be "success or failure" Feb 3 21:20:18.310: INFO: Pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432": Phase="Pending", Reason="", readiness=false. Elapsed: 35.536649ms Feb 3 21:20:20.317: INFO: Pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04204456s Feb 3 21:20:22.324: INFO: Pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049740173s Feb 3 21:20:24.332: INFO: Pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057157093s Feb 3 21:20:26.351: INFO: Pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076029256s Feb 3 21:20:28.359: INFO: Pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.084100082s STEP: Saw pod success Feb 3 21:20:28.359: INFO: Pod "downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432" satisfied condition "success or failure" Feb 3 21:20:28.363: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432 container client-container: STEP: delete the pod Feb 3 21:20:28.419: INFO: Waiting for pod downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432 to disappear Feb 3 21:20:28.426: INFO: Pod downwardapi-volume-68a0f126-ed87-4b2c-b415-d16b5cd50432 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:28.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6126" for this suite. • [SLOW TEST:10.370 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":523,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:28.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0203 21:20:30.766221 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 3 21:20:30.766: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:30.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-277" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":39,"skipped":535,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:30.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-70ca2ad5-e9f5-47cd-8f39-1fe22260727b STEP: Creating a pod to test consume secrets Feb 3 21:20:32.633: INFO: Waiting up to 5m0s for pod "pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093" in namespace "secrets-6286" to be "success or failure" Feb 3 21:20:32.828: INFO: Pod "pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093": Phase="Pending", Reason="", readiness=false. Elapsed: 194.764033ms Feb 3 21:20:34.833: INFO: Pod "pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199232321s Feb 3 21:20:36.947: INFO: Pod "pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313816335s Feb 3 21:20:38.995: INFO: Pod "pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361858059s Feb 3 21:20:41.001: INFO: Pod "pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.367577145s STEP: Saw pod success Feb 3 21:20:41.001: INFO: Pod "pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093" satisfied condition "success or failure" Feb 3 21:20:41.004: INFO: Trying to get logs from node jerma-node pod pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093 container secret-volume-test: STEP: delete the pod Feb 3 21:20:41.044: INFO: Waiting for pod pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093 to disappear Feb 3 21:20:41.075: INFO: Pod pod-secrets-aa3aca67-fb05-4665-a1cc-38b937dc5093 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:41.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6286" for this suite. • [SLOW TEST:10.296 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:41.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:21:07.227: INFO: Container started at 2020-02-03 21:20:46 +0000 UTC, pod became ready at 2020-02-03 21:21:05 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:07.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6994" for this suite. 
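Note: the roughly 20-second gap logged above between container start (21:20:46) and readiness (21:21:05) is exactly what this test asserts — a readiness probe with an initial delay keeps the pod NotReady for a while without ever restarting it. A minimal pod with the same behavior (illustrative values, not the suite's exact spec):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["true"]        # always succeeds once probing begins
      initialDelaySeconds: 20    # pod reports NotReady for at least 20s
      periodSeconds: 5
EOF
kubectl get pod readiness-demo -w   # READY column flips to 1/1 after the delay
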
• [SLOW TEST:26.129 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":585,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:07.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:21:09.303: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:21:11.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:21:13.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:21:15.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361669, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:21:18.373: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:19.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4381" for this suite. STEP: Destroying namespace "webhook-4381-markers" for this suite. 
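Note: the listing and collection-delete steps above are plain reads and deletes against the admissionregistration API; the command-line equivalents would be roughly the following (the label selector is hypothetical — the suite tags its webhooks with a generated label):

# List all ValidatingWebhookConfigurations, then delete a labeled set in one
# call, mirroring the test's list / delete-collection flow.
kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfigurations -l e2e-webhook-group=demo
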
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.107 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":42,"skipped":593,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:19.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:21:19.491: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 3 21:21:19.520: INFO: Number of nodes with available pods: 0 Feb 3 21:21:19.520: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
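Note: the color choreography that follows is driven entirely by node labels — the DaemonSet was created with a nodeSelector, so relabeling a node is what schedules or evicts its daemon pod. Done by hand, the equivalent steps would be (label key illustrative; the suite generates its own):

kubectl label node jerma-node color=blue               # daemon pod gets scheduled here
kubectl label node jerma-node color=green --overwrite  # daemon pod is unscheduled again
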
Feb 3 21:21:19.596: INFO: Number of nodes with available pods: 0 Feb 3 21:21:19.596: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:20.610: INFO: Number of nodes with available pods: 0 Feb 3 21:21:20.611: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:21.604: INFO: Number of nodes with available pods: 0 Feb 3 21:21:21.604: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:22.605: INFO: Number of nodes with available pods: 0 Feb 3 21:21:22.605: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:23.607: INFO: Number of nodes with available pods: 0 Feb 3 21:21:23.607: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:24.630: INFO: Number of nodes with available pods: 0 Feb 3 21:21:24.630: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:25.608: INFO: Number of nodes with available pods: 0 Feb 3 21:21:25.608: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:26.611: INFO: Number of nodes with available pods: 0 Feb 3 21:21:26.611: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:27.606: INFO: Number of nodes with available pods: 1 Feb 3 21:21:27.606: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 3 21:21:27.661: INFO: Number of nodes with available pods: 1 Feb 3 21:21:27.661: INFO: Number of running nodes: 0, number of available pods: 1 Feb 3 21:21:28.673: INFO: Number of nodes with available pods: 0 Feb 3 21:21:28.673: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 3 21:21:28.720: INFO: Number of nodes with available pods: 0 Feb 3 21:21:28.720: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:29.727: INFO: Number of nodes with available pods: 0 Feb 3 21:21:29.727: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:30.728: INFO: Number of nodes with available pods: 0 Feb 3 21:21:30.729: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:31.733: INFO: Number of nodes with available pods: 0 Feb 3 21:21:31.733: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:32.727: INFO: Number of nodes with available pods: 0 Feb 3 21:21:32.727: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:33.738: INFO: Number of nodes with available pods: 0 Feb 3 21:21:33.738: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:34.725: INFO: Number of nodes with available pods: 0 Feb 3 21:21:34.725: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:35.728: INFO: Number of nodes with available pods: 0 Feb 3 21:21:35.728: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:36.733: INFO: Number of nodes with available pods: 0 Feb 3 21:21:36.733: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:37.731: INFO: Number of nodes with available pods: 0 Feb 3 21:21:37.731: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:38.728: INFO: Number of nodes with available pods: 0 Feb 3 21:21:38.728: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:40.137: INFO: Number of nodes with available pods: 0 Feb 3 21:21:40.137: INFO: Node jerma-node is running more than one daemon pod Feb 3 21:21:40.728: INFO: Number of nodes with available pods: 1 Feb 3 
21:21:40.728: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6147, will wait for the garbage collector to delete the pods Feb 3 21:21:40.801: INFO: Deleting DaemonSet.extensions daemon-set took: 10.71687ms Feb 3 21:21:41.101: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.44405ms Feb 3 21:21:52.414: INFO: Number of nodes with available pods: 0 Feb 3 21:21:52.414: INFO: Number of running nodes: 0, number of available pods: 0 Feb 3 21:21:52.421: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6147/daemonsets","resourceVersion":"6198183"},"items":null} Feb 3 21:21:52.425: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6147/pods","resourceVersion":"6198183"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:52.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6147" for this suite. • [SLOW TEST:33.128 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":43,"skipped":595,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:52.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:21:52.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2" in namespace "projected-5309" to be "success or failure" Feb 3 21:21:52.721: INFO: Pod "downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 40.371408ms Feb 3 21:21:54.728: INFO: Pod "downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047733053s Feb 3 21:21:56.736: INFO: Pod "downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.054911279s Feb 3 21:21:58.743: INFO: Pod "downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06240063s Feb 3 21:22:00.754: INFO: Pod "downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073683039s STEP: Saw pod success Feb 3 21:22:00.755: INFO: Pod "downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2" satisfied condition "success or failure" Feb 3 21:22:00.758: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2 container client-container: STEP: delete the pod Feb 3 21:22:00.802: INFO: Waiting for pod downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2 to disappear Feb 3 21:22:00.806: INFO: Pod downwardapi-volume-0bbfacf9-02e0-44eb-bb6f-fb14083ca6f2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:22:00.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5309" for this suite. • [SLOW TEST:8.351 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":602,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:22:00.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
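Note: the pod created in the next step pairs that HTTP handler with a preStop exec hook, so the hook's outgoing request is observable when the pod is deleted. The general shape of such a pod (handler URL hypothetical, standing in for whatever handler pod is serving in the namespace):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container before SIGTERM is delivered
          command: ["wget", "-qO-", "http://handler.example.svc:8080/echo?msg=prestop"]
EOF
kubectl delete pod prestop-demo   # deletion runs the preStop hook first
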
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 3 21:22:13.141: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:22:13.156: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:22:15.156: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:22:15.161: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:22:17.156: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:22:17.162: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:22:19.156: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:22:19.164: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:22:21.156: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:22:21.163: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:22:23.156: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:22:23.163: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:22:23.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9724" for this suite. • [SLOW TEST:22.372 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":610,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:22:23.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-1ae9d206-87e2-435e-b042-670c230bea9d STEP: Creating a pod to test consume secrets Feb 3 21:22:23.354: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895" in namespace "projected-2537" to be "success or failure" Feb 3 21:22:23.426: INFO: Pod "pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895": Phase="Pending", Reason="", readiness=false. 
Elapsed: 71.799878ms Feb 3 21:22:25.432: INFO: Pod "pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07822392s Feb 3 21:22:27.440: INFO: Pod "pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085978588s Feb 3 21:22:29.450: INFO: Pod "pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095734709s Feb 3 21:22:31.459: INFO: Pod "pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10532972s STEP: Saw pod success Feb 3 21:22:31.459: INFO: Pod "pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895" satisfied condition "success or failure" Feb 3 21:22:31.464: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895 container projected-secret-volume-test: STEP: delete the pod Feb 3 21:22:31.628: INFO: Waiting for pod pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895 to disappear Feb 3 21:22:31.632: INFO: Pod pod-projected-secrets-584a9f0f-b37b-4250-8c7e-24aace9bd895 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:22:31.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2537" for this suite. • [SLOW TEST:8.451 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":615,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:22:31.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Feb 3 21:22:31.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 3 21:22:32.088: INFO: stderr: "" Feb 3 21:22:32.088: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:22:32.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6036" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":47,"skipped":628,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:22:32.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-8f794b55-9912-49d1-aed8-21bff6df26cd STEP: Creating secret with name s-test-opt-upd-cbdd49a2-5258-495b-9fb5-550c77ccb375 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8f794b55-9912-49d1-aed8-21bff6df26cd STEP: Updating secret s-test-opt-upd-cbdd49a2-5258-495b-9fb5-550c77ccb375 STEP: Creating secret with name s-test-opt-create-1454dd02-c163-410c-98f9-735e3d919a4d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:22:44.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7710" for this suite. 
• [SLOW TEST:12.574 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":644,"failed":0} [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:22:44.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-578.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-578.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-578.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-578.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-578.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-578.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:22:56.885: INFO: DNS probes using dns-578/dns-test-7b445840-7d0b-4354-bfdf-8c77c78b26f4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:22:56.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-578" for this suite. 
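Note: the wheezy/jessie scripts above boil down to two checks — the kubelet writes the pod's own hostname into /etc/hosts, and every pod gets an A record of the form <ip-with-dashes>.<namespace>.pod.cluster.local. A quick manual version of the first check (assuming the busybox image provides hostname and grep):

kubectl run hosts-check --rm -it --image=busybox --restart=Never -- \
  sh -c 'hostname; grep "$(hostname)" /etc/hosts'
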
• [SLOW TEST:12.421 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":49,"skipped":644,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:22:57.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 3 21:22:57.353: INFO: Waiting up to 5m0s for pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d" in namespace "emptydir-3026" to be "success or failure" Feb 3 21:22:57.466: INFO: Pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d": Phase="Pending", Reason="", readiness=false. Elapsed: 113.320101ms Feb 3 21:22:59.472: INFO: Pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119124241s Feb 3 21:23:01.481: INFO: Pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127561822s Feb 3 21:23:03.488: INFO: Pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135102816s Feb 3 21:23:05.495: INFO: Pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142044422s Feb 3 21:23:07.502: INFO: Pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148664418s STEP: Saw pod success Feb 3 21:23:07.502: INFO: Pod "pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d" satisfied condition "success or failure" Feb 3 21:23:07.506: INFO: Trying to get logs from node jerma-node pod pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d container test-container: STEP: delete the pod Feb 3 21:23:07.631: INFO: Waiting for pod pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d to disappear Feb 3 21:23:07.638: INFO: Pod pod-dc7e25bd-4a2d-4726-b901-0cd920a2b65d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:23:07.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3026" for this suite. 
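Note: decoding the test name above — "(non-root,0777,tmpfs)" means the container runs as a non-root UID, writes a file with 0777 permissions, and the emptyDir is memory-backed via medium: Memory. A rough standalone equivalent (UID and paths illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1001              # any non-root UID works; 1001 is illustrative
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "touch /mnt/cache/f && chmod 0777 /mnt/cache/f && ls -l /mnt/cache && grep /mnt/cache /proc/mounts"]
    volumeMounts:
    - name: cache
      mountPath: /mnt/cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # tmpfs-backed, so contents never hit disk
EOF
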
• [SLOW TEST:10.534 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":647,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:23:07.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-03a8237e-fb84-49cd-b2e4-2cbe9659f489 STEP: Creating a pod to test consume configMaps Feb 3 21:23:07.831: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94" in namespace "projected-3533" to be "success or failure" Feb 3 21:23:07.839: INFO: Pod "pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94": Phase="Pending", Reason="", readiness=false. Elapsed: 7.145202ms Feb 3 21:23:09.851: INFO: Pod "pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019236317s Feb 3 21:23:11.863: INFO: Pod "pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031748377s Feb 3 21:23:13.882: INFO: Pod "pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050049146s Feb 3 21:23:15.892: INFO: Pod "pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060036748s STEP: Saw pod success Feb 3 21:23:15.892: INFO: Pod "pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94" satisfied condition "success or failure" Feb 3 21:23:15.897: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94 container projected-configmap-volume-test: STEP: delete the pod Feb 3 21:23:15.947: INFO: Waiting for pod pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94 to disappear Feb 3 21:23:15.995: INFO: Pod pod-projected-configmaps-23ca0600-a2c9-4347-970f-2c6f763ecc94 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:23:15.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3533" for this suite. 
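Note: here a single ConfigMap is consumed through two separate projected volumes mounted at different paths in the same pod, and the test asserts both mounts serve the same content. A minimal sketch (names illustrative):

kubectl create configmap demo-cm --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-mount-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-one/key /etc/cm-two/key"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    projected:
      sources:
      - configMap:
          name: demo-cm
  - name: cm-two
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
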
• [SLOW TEST:8.355 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:23:16.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:23:33.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5040" for this suite. • [SLOW TEST:17.257 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":52,"skipped":687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:23:33.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Feb 3 21:23:33.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5887' Feb 3 21:23:35.642: INFO: stderr: "" Feb 3 21:23:35.643: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 21:23:35.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5887' Feb 3 21:23:35.914: INFO: stderr: "" Feb 3 21:23:35.914: INFO: stdout: "update-demo-nautilus-79wgn update-demo-nautilus-zgpls " Feb 3 21:23:35.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79wgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:23:36.049: INFO: stderr: "" Feb 3 21:23:36.049: INFO: stdout: "" Feb 3 21:23:36.049: INFO: update-demo-nautilus-79wgn is created but not running Feb 3 21:23:41.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5887' Feb 3 21:23:41.892: INFO: stderr: "" Feb 3 21:23:41.892: INFO: stdout: "update-demo-nautilus-79wgn update-demo-nautilus-zgpls " Feb 3 21:23:41.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79wgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:23:42.681: INFO: stderr: "" Feb 3 21:23:42.681: INFO: stdout: "" Feb 3 21:23:42.682: INFO: update-demo-nautilus-79wgn is created but not running Feb 3 21:23:47.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5887' Feb 3 21:23:47.879: INFO: stderr: "" Feb 3 21:23:47.880: INFO: stdout: "update-demo-nautilus-79wgn update-demo-nautilus-zgpls " Feb 3 21:23:47.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79wgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:23:48.056: INFO: stderr: "" Feb 3 21:23:48.056: INFO: stdout: "true" Feb 3 21:23:48.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79wgn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:23:48.172: INFO: stderr: "" Feb 3 21:23:48.172: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 21:23:48.172: INFO: validating pod update-demo-nautilus-79wgn Feb 3 21:23:48.198: INFO: got data: { "image": "nautilus.jpg" } Feb 3 21:23:48.198: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 21:23:48.198: INFO: update-demo-nautilus-79wgn is verified up and running Feb 3 21:23:48.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgpls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:23:48.281: INFO: stderr: "" Feb 3 21:23:48.281: INFO: stdout: "true" Feb 3 21:23:48.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgpls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:23:48.366: INFO: stderr: "" Feb 3 21:23:48.366: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 21:23:48.366: INFO: validating pod update-demo-nautilus-zgpls Feb 3 21:23:48.372: INFO: got data: { "image": "nautilus.jpg" } Feb 3 21:23:48.372: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 3 21:23:48.372: INFO: update-demo-nautilus-zgpls is verified up and running STEP: rolling-update to new replication controller Feb 3 21:23:48.377: INFO: scanned /root for discovery docs: Feb 3 21:23:48.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5887' Feb 3 21:24:17.667: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 3 21:24:17.667: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 21:24:17.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5887' Feb 3 21:24:17.945: INFO: stderr: "" Feb 3 21:24:17.945: INFO: stdout: "update-demo-kitten-72flh update-demo-kitten-rvkn9 " Feb 3 21:24:17.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-72flh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:24:18.081: INFO: stderr: "" Feb 3 21:24:18.081: INFO: stdout: "true" Feb 3 21:24:18.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-72flh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:24:18.182: INFO: stderr: "" Feb 3 21:24:18.183: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 3 21:24:18.183: INFO: validating pod update-demo-kitten-72flh Feb 3 21:24:18.208: INFO: got data: { "image": "kitten.jpg" } Feb 3 21:24:18.208: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 3 21:24:18.208: INFO: update-demo-kitten-72flh is verified up and running Feb 3 21:24:18.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rvkn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:24:18.298: INFO: stderr: "" Feb 3 21:24:18.298: INFO: stdout: "true" Feb 3 21:24:18.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rvkn9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5887' Feb 3 21:24:18.443: INFO: stderr: "" Feb 3 21:24:18.443: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 3 21:24:18.443: INFO: validating pod update-demo-kitten-rvkn9 Feb 3 21:24:18.460: INFO: got data: { "image": "kitten.jpg" } Feb 3 21:24:18.460: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 3 21:24:18.460: INFO: update-demo-kitten-rvkn9 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:24:18.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5887" for this suite. • [SLOW TEST:45.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":53,"skipped":724,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:24:18.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:24:18.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae" in namespace "projected-1391" to be "success or failure" Feb 3 21:24:18.613: INFO: Pod "downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 20.273272ms Feb 3 21:24:20.623: INFO: Pod "downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030189174s Feb 3 21:24:22.630: INFO: Pod "downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037432638s Feb 3 21:24:25.157: INFO: Pod "downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.564527398s Feb 3 21:24:28.157: INFO: Pod "downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.564724766s STEP: Saw pod success Feb 3 21:24:28.158: INFO: Pod "downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae" satisfied condition "success or failure" Feb 3 21:24:28.501: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae container client-container: STEP: delete the pod Feb 3 21:24:28.705: INFO: Waiting for pod downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae to disappear Feb 3 21:24:28.715: INFO: Pod downwardapi-volume-9a77f6da-96a8-4350-a6f4-e3cdb9d9d3ae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:24:28.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1391" for this suite. • [SLOW TEST:10.258 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":746,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:24:28.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 3 21:24:28.945: INFO: Waiting up to 5m0s for pod "pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3" in namespace "emptydir-1152" to be "success or failure" Feb 3 21:24:28.964: INFO: Pod "pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.639198ms Feb 3 21:24:30.973: INFO: Pod "pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027852533s Feb 3 21:24:32.978: INFO: Pod "pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032225606s Feb 3 21:24:35.034: INFO: Pod "pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087954562s Feb 3 21:24:37.040: INFO: Pod "pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.094054342s STEP: Saw pod success Feb 3 21:24:37.040: INFO: Pod "pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3" satisfied condition "success or failure" Feb 3 21:24:37.043: INFO: Trying to get logs from node jerma-node pod pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3 container test-container: STEP: delete the pod Feb 3 21:24:37.086: INFO: Waiting for pod pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3 to disappear Feb 3 21:24:37.092: INFO: Pod pod-550c7f38-26d4-4d76-bc84-0a2bf9fcbab3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:24:37.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1152" for this suite. • [SLOW TEST:8.370 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":751,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:24:37.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:24:38.407: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:24:40.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:24:42.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:24:44.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716361878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:24:47.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:24:48.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9726" for this suite. STEP: Destroying namespace "webhook-9726-markers" for this suite. 
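For reference, the listing and collection-deletion exercised above can be reproduced directly with kubectl; a minimal sketch, assuming the webhook configurations carry an illustrative label app=sample-webhook (the label is an assumption, not taken from this run):

$ kubectl get mutatingwebhookconfigurations
$ kubectl delete mutatingwebhookconfigurations -l app=sample-webhook

Deleting by label selector mirrors the collection delete in the test; a ConfigMap created afterwards should no longer be mutated, which is what the final step verifies.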
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.005 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":56,"skipped":761,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:24:49.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Feb 3 21:24:49.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8512 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 3 21:24:58.116: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0203 21:24:57.262653 811 log.go:172] (0xc000108c60) (0xc0006c5cc0) Create stream\nI0203 21:24:57.263123 811 log.go:172] (0xc000108c60) (0xc0006c5cc0) Stream added, broadcasting: 1\nI0203 21:24:57.268749 811 log.go:172] (0xc000108c60) Reply frame received for 1\nI0203 21:24:57.268822 811 log.go:172] (0xc000108c60) (0xc000b420a0) Create stream\nI0203 21:24:57.268843 811 log.go:172] (0xc000108c60) (0xc000b420a0) Stream added, broadcasting: 3\nI0203 21:24:57.272447 811 log.go:172] (0xc000108c60) Reply frame received for 3\nI0203 21:24:57.272551 811 log.go:172] (0xc000108c60) (0xc0006c5d60) Create stream\nI0203 21:24:57.272576 811 log.go:172] (0xc000108c60) (0xc0006c5d60) Stream added, broadcasting: 5\nI0203 21:24:57.275803 811 log.go:172] (0xc000108c60) Reply frame received for 5\nI0203 21:24:57.275898 811 log.go:172] (0xc000108c60) (0xc000b421e0) Create stream\nI0203 21:24:57.275927 811 log.go:172] (0xc000108c60) (0xc000b421e0) Stream added, broadcasting: 7\nI0203 21:24:57.278739 811 log.go:172] (0xc000108c60) Reply frame received for 7\nI0203 21:24:57.279564 811 log.go:172] (0xc000b420a0) (3) Writing data frame\nI0203 21:24:57.280092 811 log.go:172] (0xc000b420a0) (3) Writing data frame\nI0203 21:24:57.293822 811 log.go:172] (0xc000108c60) Data frame received for 5\nI0203 21:24:57.293903 811 log.go:172] (0xc0006c5d60) (5) Data frame handling\nI0203 21:24:57.293971 811 log.go:172] (0xc0006c5d60) (5) Data frame sent\nI0203 21:24:57.297710 811 log.go:172] (0xc000108c60) Data frame received for 5\nI0203 21:24:57.297749 811 log.go:172] (0xc0006c5d60) (5) Data frame handling\nI0203 21:24:57.297772 811 log.go:172] (0xc0006c5d60) (5) Data frame sent\nI0203 21:24:58.072623 811 log.go:172] (0xc000108c60) Data frame received for 1\nI0203 21:24:58.072822 811 log.go:172] (0xc000108c60) (0xc000b421e0) Stream removed, broadcasting: 7\nI0203 21:24:58.072967 811 log.go:172] (0xc0006c5cc0) (1) Data frame handling\nI0203 21:24:58.073002 811 log.go:172] (0xc0006c5cc0) (1) Data frame sent\nI0203 21:24:58.073056 811 log.go:172] (0xc000108c60) (0xc0006c5d60) Stream removed, broadcasting: 5\nI0203 21:24:58.073100 811 log.go:172] (0xc000108c60) (0xc000b420a0) Stream removed, broadcasting: 3\nI0203 21:24:58.073157 811 log.go:172] (0xc000108c60) (0xc0006c5cc0) Stream removed, broadcasting: 1\nI0203 21:24:58.073191 811 log.go:172] (0xc000108c60) Go away received\nI0203 21:24:58.074355 811 log.go:172] (0xc000108c60) (0xc0006c5cc0) Stream removed, broadcasting: 1\nI0203 21:24:58.074388 811 log.go:172] (0xc000108c60) (0xc000b420a0) Stream removed, broadcasting: 3\nI0203 21:24:58.074396 811 log.go:172] (0xc000108c60) (0xc0006c5d60) Stream removed, broadcasting: 5\nI0203 21:24:58.074408 811 log.go:172] (0xc000108c60) (0xc000b421e0) Stream removed, broadcasting: 7\n" Feb 3 21:24:58.116: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:25:00.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8512" for this suite. 
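The command under test can be reproduced outside the suite; a minimal sketch using the image and flags recorded in the log above, with stdin supplied by a pipe instead of an interactive terminal:

$ echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 \
    --rm=true --generator=job/v1 --restart=OnFailure \
    --attach=true --stdin -- sh -c 'cat && echo stdin closed'

--rm deletes the job once the attached session ends, which is what the "verifying the job e2e-test-rm-busybox-job was deleted" step checks; as the deprecation warning in the stderr above notes, newer kubectl releases drop this --generator in favour of the run-pod/v1 generator or kubectl create.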
• [SLOW TEST:11.033 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":57,"skipped":768,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:25:00.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc in namespace container-probe-3240 Feb 3 21:25:08.297: INFO: Started pod liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc in namespace container-probe-3240 STEP: checking the pod's current state and verifying that restartCount is present Feb 3 21:25:08.309: INFO: Initial restart count of pod liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc is 0 Feb 3 21:25:20.387: INFO: Restart count of pod container-probe-3240/liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc is now 1 (12.077522457s elapsed) Feb 3 21:25:40.633: INFO: Restart count of pod container-probe-3240/liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc is now 2 (32.323152925s elapsed) Feb 3 21:26:02.715: INFO: Restart count of pod container-probe-3240/liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc is now 3 (54.404959722s elapsed) Feb 3 21:26:20.814: INFO: Restart count of pod container-probe-3240/liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc is now 4 (1m12.504889967s elapsed) Feb 3 21:27:23.146: INFO: Restart count of pod container-probe-3240/liveness-d74c47dc-b156-4818-b909-6b7f4e3f22dc is now 5 (2m14.836341305s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:27:23.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3240" for this suite. 
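A pod whose liveness probe starts failing on a schedule is enough to reproduce the monotonically increasing restartCount above; a minimal sketch, assuming an exec probe that fails once /tmp/healthy is removed (the manifest is illustrative, not the suite's exact pod):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo    # illustrative name
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    # healthy for ~30s, then the probe's target file disappears
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
$ kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'

Each failed probe cycle kills and restarts the container, so the restartCount read back by the second command only ever grows, which is the property the test asserts.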
• [SLOW TEST:143.092 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":786,"failed":0} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:27:23.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8292 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8292 STEP: Deleting pre-stop pod Feb 3 21:27:44.439: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:27:44.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8292" for this suite. 
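The "prestop": 1 counter in the JSON above is driven by a preStop lifecycle hook on the tester pod; a minimal sketch of the hook's shape, with an illustrative local command standing in for the test's HTTP call to the server pod:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo    # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # the e2e tester notifies the server pod here instead
          command: ["/bin/sh", "-c", "echo prestop > /tmp/hook.log; sleep 5"]
EOF

When the pod is deleted, the kubelet runs the hook to completion (within the grace period) before stopping the container, which is why the server records a prestop hit before the tester disappears.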
• [SLOW TEST:21.299 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":59,"skipped":786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:27:44.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 3 21:27:54.016: INFO: 0 pods remaining Feb 3 21:27:54.016: INFO: 0 pods has nil DeletionTimestamp Feb 3 21:27:54.016: INFO: STEP: Gathering metrics W0203 21:27:54.833912 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 3 21:27:54.834: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:27:54.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9943" for this suite. 
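The deleteOptions behaviour above is foreground cascading deletion: the rc receives a deletionTimestamp plus a foregroundDeletion finalizer, and is only removed once its pods are gone. A minimal sketch of issuing such a delete against the raw API via kubectl proxy (rc name and namespace are illustrative assumptions):

$ kubectl proxy --port=8001 &
$ curl -X DELETE http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'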
• [SLOW TEST:10.501 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":60,"skipped":823,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:27:55.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:28:02.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362080, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362080, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Feb 3 21:28:04.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362082, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:28:06.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362082, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:28:08.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362082, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:28:10.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362081, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362082, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362080, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:28:13.568: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:28:13.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: 
Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:28:14.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-74" for this suite. STEP: Destroying namespace "webhook-74-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.716 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":61,"skipped":836,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:28:14.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3122.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3122.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:28:26.849: INFO: DNS probes using dns-test-f9c60705-ecd6-4322-b0ec-10c8eec8d586 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3122.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3122.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving 
the pod STEP: looking for the results for each expected name from probers Feb 3 21:28:41.026: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 3 21:28:41.030: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 3 21:28:41.030: INFO: Lookups using dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local] Feb 3 21:28:46.039: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 3 21:28:46.046: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 3 21:28:46.046: INFO: Lookups using dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local] Feb 3 21:28:51.047: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 3 21:28:51.052: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef contains '' instead of 'bar.example.com.' Feb 3 21:28:51.052: INFO: Lookups using dns-3122/dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local] Feb 3 21:28:56.045: INFO: DNS probes using dns-test-177c896a-f5bc-4b9d-ab0a-29cde86bdfef succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3122.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3122.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:29:08.429: INFO: DNS probes using dns-test-7cca1a3d-4a98-4ac0-bf82-01687aaa6ac0 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:29:08.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3122" for this suite. 
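The CNAME flips observed above come from editing the Service spec; a minimal sketch of the same sequence (the suite used namespace dns-3122, substitute your own):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
$ kubectl patch service dns-test-service-3 \
    -p '{"spec":{"externalName":"bar.example.com"}}'

From inside the cluster, dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME should track the change from foo.example.com. to bar.example.com.; the stale answers in the failed lookups above are just the probe pods catching up with the update.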
• [SLOW TEST:54.031 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":62,"skipped":854,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:29:08.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 3 21:29:17.757: INFO: Successfully updated pod "pod-update-activedeadlineseconds-31d71d62-036b-4b18-906a-5cfdb7082737" Feb 3 21:29:17.757: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-31d71d62-036b-4b18-906a-5cfdb7082737" in namespace "pods-5841" to be "terminated due to deadline exceeded" Feb 3 21:29:17.765: INFO: Pod "pod-update-activedeadlineseconds-31d71d62-036b-4b18-906a-5cfdb7082737": Phase="Running", Reason="", readiness=true. Elapsed: 7.918691ms Feb 3 21:29:19.774: INFO: Pod "pod-update-activedeadlineseconds-31d71d62-036b-4b18-906a-5cfdb7082737": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.016653339s Feb 3 21:29:19.774: INFO: Pod "pod-update-activedeadlineseconds-31d71d62-036b-4b18-906a-5cfdb7082737" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:29:19.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5841" for this suite. 
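activeDeadlineSeconds is one of the few pod spec fields that may be changed on a live pod (it can be set, or decreased, but never increased); a minimal sketch of the update and the check (pod name is illustrative):

$ kubectl patch pod pod-update-demo \
    -p '{"spec":{"activeDeadlineSeconds":5}}'
$ kubectl get pod pod-update-demo \
    -o jsonpath='{.status.phase}/{.status.reason}'

Once the deadline elapses the kubelet kills the pod and it reports Failed/DeadlineExceeded, matching the "terminated due to deadline exceeded" condition polled above.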
• [SLOW TEST:11.000 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":863,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:29:19.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3271.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3271.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3271.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3271.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3271.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3271.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:29:30.143: INFO: DNS probes using dns-3271/dns-test-13fedb40-9007-44d6-b244-f2c7bdd2a17b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:29:30.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3271" for this suite. 
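The dns-querier-2.dns-test-service-2 records probed above exist because a pod's hostname and subdomain, paired with a headless Service whose name matches the subdomain, get their own DNS entry; a minimal sketch (labels, the port, and the default namespace are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None          # headless
  selector:
    app: dns-querier
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: querier
  labels:
    app: dns-querier
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
EOF
$ kubectl exec querier -- nslookup dns-querier-2.dns-test-service-2.default.svc.cluster.local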
• [SLOW TEST:10.513 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":64,"skipped":870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:29:30.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Feb 3 21:29:30.428: INFO: Waiting up to 5m0s for pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8" in namespace "downward-api-1171" to be "success or failure" Feb 3 21:29:30.433: INFO: Pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.996834ms Feb 3 21:29:32.440: INFO: Pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011993134s Feb 3 21:29:34.447: INFO: Pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019692206s Feb 3 21:29:36.453: INFO: Pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025377199s Feb 3 21:29:38.462: INFO: Pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034529199s Feb 3 21:29:40.472: INFO: Pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044646661s STEP: Saw pod success Feb 3 21:29:40.473: INFO: Pod "downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8" satisfied condition "success or failure" Feb 3 21:29:40.476: INFO: Trying to get logs from node jerma-node pod downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8 container dapi-container: STEP: delete the pod Feb 3 21:29:40.538: INFO: Waiting for pod downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8 to disappear Feb 3 21:29:40.547: INFO: Pod downward-api-1143267a-f149-4b06-8b62-eb20a1e4cad8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:29:40.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1171" for this suite. 
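The env vars under test come from downward API resourceFieldRef selectors; a minimal sketch of a container that prints its own limits and requests (names and resource values are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
          divisor: 1Mi    # expose the request in MiB
EOF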
• [SLOW TEST:10.262 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":918,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:29:40.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 3 21:29:40.748: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 21:29:40.760: INFO: Waiting for terminating namespaces to be deleted... Feb 3 21:29:40.763: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 3 21:29:40.771: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.771: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:29:40.771: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 3 21:29:40.771: INFO: Container weave ready: true, restart count 1 Feb 3 21:29:40.771: INFO: Container weave-npc ready: true, restart count 0 Feb 3 21:29:40.771: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 3 21:29:40.799: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.799: INFO: Container kube-controller-manager ready: true, restart count 3 Feb 3 21:29:40.799: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.799: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:29:40.799: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 3 21:29:40.799: INFO: Container weave ready: true, restart count 0 Feb 3 21:29:40.799: INFO: Container weave-npc ready: true, restart count 0 Feb 3 21:29:40.799: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.799: INFO: Container kube-scheduler ready: true, restart count 4 Feb 3 21:29:40.799: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.799: INFO: Container kube-apiserver ready: true, restart count 1 Feb 3 21:29:40.799: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 
11:47:54 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.799: INFO: Container etcd ready: true, restart count 1 Feb 3 21:29:40.799: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.799: INFO: Container coredns ready: true, restart count 0 Feb 3 21:29:40.799: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 3 21:29:40.799: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Feb 3 21:29:41.099: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 3 21:29:41.100: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 3 21:29:41.100: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 3 21:29:41.100: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Feb 3 21:29:41.100: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Feb 3 21:29:41.100: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 3 21:29:41.100: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Feb 3 21:29:41.100: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 3 21:29:41.100: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Feb 3 21:29:41.100: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Feb 3 21:29:41.100: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Feb 3 21:29:41.110: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-eac4fb8c-6e63-4568-8f70-5c8b78aaeb6e.15f001bd27176667], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8420/filler-pod-eac4fb8c-6e63-4568-8f70-5c8b78aaeb6e to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-eac4fb8c-6e63-4568-8f70-5c8b78aaeb6e.15f001be05065405], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-eac4fb8c-6e63-4568-8f70-5c8b78aaeb6e.15f001bec24a5fcf], Reason = [Created], Message = [Created container filler-pod-eac4fb8c-6e63-4568-8f70-5c8b78aaeb6e] STEP: Considering event: Type = [Normal], Name = [filler-pod-eac4fb8c-6e63-4568-8f70-5c8b78aaeb6e.15f001bef4048376], Reason = [Started], Message = [Started container filler-pod-eac4fb8c-6e63-4568-8f70-5c8b78aaeb6e] STEP: Considering event: Type = [Normal], Name = [filler-pod-fb89cd18-6adc-4002-b63a-9e763ed7ddec.15f001bd29124d75], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8420/filler-pod-fb89cd18-6adc-4002-b63a-9e763ed7ddec to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-fb89cd18-6adc-4002-b63a-9e763ed7ddec.15f001be570935d1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-fb89cd18-6adc-4002-b63a-9e763ed7ddec.15f001bf316137ed], Reason = [Created], Message = [Created container filler-pod-fb89cd18-6adc-4002-b63a-9e763ed7ddec] STEP: Considering event: Type = [Normal], Name = [filler-pod-fb89cd18-6adc-4002-b63a-9e763ed7ddec.15f001bf51b2dee2], Reason = [Started], Message = [Started container filler-pod-fb89cd18-6adc-4002-b63a-9e763ed7ddec] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f001bf7ec595dd], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f001bf870f40b3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:29:52.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8420" for this suite. 
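The FailedScheduling outcome above can be reproduced by requesting more CPU than any node has left after existing requests (which the test sums per node, as logged) are subtracted from allocatable; a minimal sketch, with the 5-CPU request as an illustrative oversized ask:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "5"
EOF
$ kubectl get events --field-selector reason=FailedScheduling

The pod stays Pending and the event stream carries the same "0/2 nodes are available: 2 Insufficient cpu." message seen in the considered events.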
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:11.803 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":66,"skipped":919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:29:52.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Feb 3 21:29:52.509: INFO: Created pod &Pod{ObjectMeta:{dns-3377 dns-3377 /api/v1/namespaces/dns-3377/pods/dns-3377 ed75bdda-1b4d-4dba-90ba-6539b9e08c40 6200340 0 2020-02-03 21:29:52 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s5g9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s5g9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s5g9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Feb 3 21:30:00.947: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3377 PodName:dns-3377 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:30:00.947: INFO: >>> kubeConfig: /root/.kube/config I0203 21:30:01.147866 8 log.go:172] (0xc001474fd0) (0xc0024539a0) Create stream I0203 21:30:01.148033 8 log.go:172] (0xc001474fd0) (0xc0024539a0) Stream added, broadcasting: 1 I0203 21:30:01.155101 8 log.go:172] (0xc001474fd0) Reply frame received for 1 I0203 21:30:01.155170 8 log.go:172] (0xc001474fd0) (0xc0029d80a0) Create stream I0203 21:30:01.155192 8 log.go:172] (0xc001474fd0) (0xc0029d80a0) Stream added, broadcasting: 3 I0203 21:30:01.157351 8 log.go:172] (0xc001474fd0) Reply frame received for 3 I0203 21:30:01.157383 8 log.go:172] (0xc001474fd0) (0xc002453a40) Create stream I0203 21:30:01.157397 8 log.go:172] (0xc001474fd0) (0xc002453a40) Stream added, broadcasting: 5 I0203 21:30:01.158989 8 log.go:172] (0xc001474fd0) Reply frame received for 5 I0203 21:30:01.340831 8 log.go:172] (0xc001474fd0) Data frame received for 3 I0203 21:30:01.340940 8 log.go:172] (0xc0029d80a0) (3) Data frame handling I0203 21:30:01.340970 8 log.go:172] (0xc0029d80a0) (3) Data frame sent I0203 21:30:01.486133 8 log.go:172] (0xc001474fd0) (0xc002453a40) Stream removed, broadcasting: 5 I0203 21:30:01.486389 8 log.go:172] (0xc001474fd0) (0xc0029d80a0) Stream removed, broadcasting: 3 I0203 21:30:01.486543 8 log.go:172] (0xc001474fd0) Data frame received for 1 I0203 21:30:01.486717 8 log.go:172] (0xc0024539a0) (1) Data frame handling I0203 21:30:01.486782 8 log.go:172] (0xc0024539a0) (1) Data frame sent I0203 21:30:01.486827 8 log.go:172] (0xc001474fd0) (0xc0024539a0) Stream removed, broadcasting: 1 I0203 21:30:01.486956 8 log.go:172] (0xc001474fd0) Go away received I0203 21:30:01.487682 8 log.go:172] (0xc001474fd0) (0xc0024539a0) Stream removed, broadcasting: 1 I0203 21:30:01.487765 8 log.go:172] (0xc001474fd0) (0xc0029d80a0) Stream removed, broadcasting: 3 I0203 21:30:01.487789 8 log.go:172] (0xc001474fd0) (0xc002453a40) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Feb 3 21:30:01.488: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3377 PodName:dns-3377 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:30:01.488: INFO: >>> kubeConfig: /root/.kube/config I0203 21:30:01.535687 8 log.go:172] (0xc0022f2370) (0xc0029d8500) Create stream I0203 21:30:01.535834 8 log.go:172] (0xc0022f2370) (0xc0029d8500) Stream added, broadcasting: 1 I0203 21:30:01.540003 8 log.go:172] (0xc0022f2370) Reply frame received for 1 I0203 21:30:01.540128 8 log.go:172] (0xc0022f2370) (0xc0029d85a0) Create stream I0203 21:30:01.540140 8 log.go:172] (0xc0022f2370) (0xc0029d85a0) Stream added, broadcasting: 3 I0203 21:30:01.542772 8 log.go:172] (0xc0022f2370) Reply frame received for 3 I0203 21:30:01.542802 8 log.go:172] (0xc0022f2370) (0xc0017e63c0) Create stream I0203 21:30:01.542809 8 log.go:172] (0xc0022f2370) (0xc0017e63c0) Stream added, broadcasting: 5 I0203 21:30:01.543597 8 log.go:172] (0xc0022f2370) Reply frame received for 5 I0203 21:30:01.635501 8 log.go:172] (0xc0022f2370) Data frame received for 3 I0203 21:30:01.635555 8 log.go:172] (0xc0029d85a0) (3) Data frame handling I0203 21:30:01.635575 8 log.go:172] (0xc0029d85a0) (3) Data frame sent I0203 21:30:01.751433 8 log.go:172] (0xc0022f2370) Data frame received for 1 I0203 21:30:01.751516 8 log.go:172] (0xc0029d8500) (1) Data frame handling I0203 21:30:01.751537 8 log.go:172] (0xc0029d8500) (1) Data frame sent I0203 21:30:01.751560 8 log.go:172] (0xc0022f2370) (0xc0029d8500) Stream removed, broadcasting: 1 I0203 21:30:01.751743 8 log.go:172] (0xc0022f2370) (0xc0029d85a0) Stream removed, broadcasting: 3 I0203 21:30:01.752083 8 log.go:172] (0xc0022f2370) (0xc0017e63c0) Stream removed, broadcasting: 5 I0203 21:30:01.752171 8 log.go:172] (0xc0022f2370) (0xc0029d8500) Stream removed, broadcasting: 1 I0203 21:30:01.752225 8 log.go:172] (0xc0022f2370) (0xc0029d85a0) Stream removed, broadcasting: 3 I0203 21:30:01.752235 8 log.go:172] (0xc0022f2370) (0xc0017e63c0) Stream removed, broadcasting: 5 I0203 21:30:01.752346 8 log.go:172] (0xc0022f2370) Go away received Feb 3 21:30:01.752: INFO: Deleting pod dns-3377... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:30:03.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3377" for this suite. 
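
The test passes because dnsPolicy None tells the kubelet to ignore cluster DNS entirely and render the pod's /etc/resolv.conf from dnsConfig alone; the agnhost exec probes above then read back the nameserver 1.1.1.1 and the search domain resolv.conf.local visible in the spec dump. A minimal sketch of the relevant part of such a spec (the image and args match the dump above; the pod name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			// None: do not inherit cluster or node DNS settings;
			// resolv.conf is rendered from DNSConfig verbatim.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	b, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(b))
}
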
• [SLOW TEST:11.113 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":67,"skipped":981,"failed":0} SS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:30:03.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:30:03.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4271" for this suite. 
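
The Table test hinges on content negotiation: a client may ask the API server to render any resource as a meta.k8s.io Table by setting an Accept header, and a backend that cannot produce that representation must answer 406 Not Acceptable. A sketch of issuing such a request with client-go; this is written against a recent client-go where DoRaw takes a context (older releases take no argument), and the kubeconfig path and namespace are illustrative:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("pods").Namespace("default").
		// Ask for the server-side Table rendering of the list.
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(context.TODO())
	if err != nil {
		// A backend that cannot serve Table metadata answers
		// 406 Not Acceptable, which is what this spec asserts.
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(string(raw))
}
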
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":68,"skipped":983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:30:03.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:30:57.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-104" for this suite. 
• [SLOW TEST:53.370 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1033,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:30:57.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 3 21:30:57.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8566' Feb 3 21:30:57.586: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 3 21:30:57.586: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Feb 3 21:30:57.636: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-n25zh] Feb 3 21:30:57.636: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-n25zh" in namespace "kubectl-8566" to be "running and ready" Feb 3 21:30:57.717: INFO: Pod "e2e-test-httpd-rc-n25zh": Phase="Pending", Reason="", readiness=false. Elapsed: 81.448924ms Feb 3 21:30:59.723: INFO: Pod "e2e-test-httpd-rc-n25zh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08719826s Feb 3 21:31:01.732: INFO: Pod "e2e-test-httpd-rc-n25zh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095930165s Feb 3 21:31:03.741: INFO: Pod "e2e-test-httpd-rc-n25zh": Phase="Running", Reason="", readiness=true. Elapsed: 6.1050881s Feb 3 21:31:03.741: INFO: Pod "e2e-test-httpd-rc-n25zh" satisfied condition "running and ready" Feb 3 21:31:03.741: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-n25zh] Feb 3 21:31:03.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8566' Feb 3 21:31:03.994: INFO: stderr: "" Feb 3 21:31:03.994: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Mon Feb 03 21:31:02.349835 2020] [mpm_event:notice] [pid 1:tid 140394286680936] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Feb 03 21:31:02.349901 2020] [core:notice] [pid 1:tid 140394286680936] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Feb 3 21:31:03.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8566' Feb 3 21:31:04.175: INFO: stderr: "" Feb 3 21:31:04.175: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:31:04.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8566" for this suite. • [SLOW TEST:6.967 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":70,"skipped":1034,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:31:04.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-e2fb4dfd-45a2-4edc-b21a-275abd359ea8 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:31:14.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6594" for this suite. 
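
The ConfigMap spec just verified relies on the object's two payload maps: data carries UTF-8 strings, while binaryData carries arbitrary bytes (base64-encoded on the wire); mounted as a volume, each key from either map becomes a file, which is what the "Waiting for pod with text data / binary data" steps read back. A minimal sketch of such an object and its volume (names and byte values are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
		// binaryData keys may hold bytes that are not valid UTF-8;
		// the API server serializes them as base64.
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe}},
	}
	// Mounted as a volume, the keys appear as files data-1 and dump.
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
			},
		},
	}
	for _, obj := range []interface{}{cm, vol} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
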
• [SLOW TEST:10.196 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1045,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:31:14.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:31:14.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9" in namespace "projected-3056" to be "success or failure" Feb 3 21:31:14.555: INFO: Pod "downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.996458ms Feb 3 21:31:16.574: INFO: Pod "downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038187151s Feb 3 21:31:18.584: INFO: Pod "downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048563617s Feb 3 21:31:20.593: INFO: Pod "downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057708627s STEP: Saw pod success Feb 3 21:31:20.594: INFO: Pod "downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9" satisfied condition "success or failure" Feb 3 21:31:20.598: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9 container client-container: STEP: delete the pod Feb 3 21:31:20.688: INFO: Waiting for pod downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9 to disappear Feb 3 21:31:20.719: INFO: Pod downwardapi-volume-b7ef6981-b1a5-4a0b-98e5-bdd827e12be9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:31:20.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3056" for this suite. 
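
The "mode on item file" being set here is the per-item file mode of a projected downward API volume: each item names a pod field, a target path, and optionally a mode for the resulting file, which the client container then reports back. A minimal sketch of such a volume; the path and mode value are illustrative, not the suite's:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							// Per-item mode overrides the volume-wide
							// default mode for this one file.
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
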
• [SLOW TEST:6.348 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1048,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:31:20.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 3 21:31:22.069: INFO: Pod name wrapped-volume-race-58306eaa-41bf-45e5-b65c-60b339e55b45: Found 0 pods out of 5 Feb 3 21:31:27.079: INFO: Pod name wrapped-volume-race-58306eaa-41bf-45e5-b65c-60b339e55b45: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-58306eaa-41bf-45e5-b65c-60b339e55b45 in namespace emptydir-wrapper-2433, will wait for the garbage collector to delete the pods Feb 3 21:31:55.172: INFO: Deleting ReplicationController wrapped-volume-race-58306eaa-41bf-45e5-b65c-60b339e55b45 took: 9.352328ms Feb 3 21:31:55.673: INFO: Terminating ReplicationController wrapped-volume-race-58306eaa-41bf-45e5-b65c-60b339e55b45 pods took: 500.506816ms STEP: Creating RC which spawns configmap-volume pods Feb 3 21:32:13.349: INFO: Pod name wrapped-volume-race-22810187-788d-4c94-b447-3fb24b922c37: Found 0 pods out of 5 Feb 3 21:32:18.368: INFO: Pod name wrapped-volume-race-22810187-788d-4c94-b447-3fb24b922c37: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-22810187-788d-4c94-b447-3fb24b922c37 in namespace emptydir-wrapper-2433, will wait for the garbage collector to delete the pods Feb 3 21:32:48.512: INFO: Deleting ReplicationController wrapped-volume-race-22810187-788d-4c94-b447-3fb24b922c37 took: 14.412314ms Feb 3 21:32:48.912: INFO: Terminating ReplicationController wrapped-volume-race-22810187-788d-4c94-b447-3fb24b922c37 pods took: 400.711839ms STEP: Creating RC which spawns configmap-volume pods Feb 3 21:33:00.086: INFO: Pod name wrapped-volume-race-8ab56f5a-715c-4ac4-9307-a12d9e9bde15: Found 0 pods out of 5 Feb 3 21:33:05.139: INFO: Pod name wrapped-volume-race-8ab56f5a-715c-4ac4-9307-a12d9e9bde15: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8ab56f5a-715c-4ac4-9307-a12d9e9bde15 in namespace emptydir-wrapper-2433, will wait for the garbage collector to delete the pods Feb 3 21:33:33.276: INFO: Deleting ReplicationController 
wrapped-volume-race-8ab56f5a-715c-4ac4-9307-a12d9e9bde15 took: 12.344845ms Feb 3 21:33:33.778: INFO: Terminating ReplicationController wrapped-volume-race-8ab56f5a-715c-4ac4-9307-a12d9e9bde15 pods took: 501.415148ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:33:54.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2433" for this suite. • [SLOW TEST:153.600 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":73,"skipped":1061,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:33:54.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Feb 3 21:33:54.519: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:34:09.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-301" for this suite. 
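
The one-line setup above ("PodSpec: initContainers in spec.initContainers") is the whole mechanism: init containers run sequentially before any app container, and with restartPolicy Never a failing init container is not retried, so the pod lands in Phase=Failed and the app container never starts. A minimal sketch of such a pod; the names, image, and commands are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-never"},
		Spec: corev1.PodSpec{
			// Never: the failed init container is not retried and the
			// pod goes straight to Phase=Failed.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "exit 1"},
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo never runs"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
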
• [SLOW TEST:15.323 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":74,"skipped":1074,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:34:09.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:34:14.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5355" for this suite. 
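
The ordering guarantee this spec checks comes from the API server's single, monotonically increasing resourceVersion stream: watches opened from different resource versions must still deliver overlapping events in the same order. A sketch of opening such a watch with client-go; this assumes a recent client-go where Watch takes a context, and the kubeconfig path, namespace, and label selector are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ListOptions may also carry ResourceVersion to pick the point in
	// history to start from; two watches started at different versions
	// still observe shared events in the same order.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=true"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a bookmark or error object
		}
		fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
	}
}
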
• [SLOW TEST:5.185 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":75,"skipped":1100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:34:14.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:34:15.524: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:34:17.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:34:19.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:34:21.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362455, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:34:24.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:34:24.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3046" for this suite. STEP: Destroying namespace "webhook-3046-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.220 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":76,"skipped":1124,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:34:25.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 3 21:34:25.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3258 /api/v1/namespaces/watch-3258/configmaps/e2e-watch-test-label-changed 15fb121e-34f1-4842-a86d-5d8b382d54d0 6202166 0 2020-02-03 21:34:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 3 21:34:25.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3258 /api/v1/namespaces/watch-3258/configmaps/e2e-watch-test-label-changed 15fb121e-34f1-4842-a86d-5d8b382d54d0 6202167 0 2020-02-03 21:34:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 3 21:34:25.319: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3258 /api/v1/namespaces/watch-3258/configmaps/e2e-watch-test-label-changed 15fb121e-34f1-4842-a86d-5d8b382d54d0 6202169 0 2020-02-03 21:34:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 3 21:34:35.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3258 
/api/v1/namespaces/watch-3258/configmaps/e2e-watch-test-label-changed 15fb121e-34f1-4842-a86d-5d8b382d54d0 6202214 0 2020-02-03 21:34:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 3 21:34:35.413: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3258 /api/v1/namespaces/watch-3258/configmaps/e2e-watch-test-label-changed 15fb121e-34f1-4842-a86d-5d8b382d54d0 6202215 0 2020-02-03 21:34:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 3 21:34:35.413: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3258 /api/v1/namespaces/watch-3258/configmaps/e2e-watch-test-label-changed 15fb121e-34f1-4842-a86d-5d8b382d54d0 6202216 0 2020-02-03 21:34:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:34:35.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3258" for this suite. • [SLOW TEST:10.358 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":77,"skipped":1132,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:34:35.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1041/configmap-test-3501e97a-0e49-4a5c-8f09-e4409332a7f8 STEP: Creating a pod to test consume configMaps Feb 3 21:34:35.578: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2" in namespace "configmap-1041" to be "success or failure" Feb 3 21:34:35.673: INFO: Pod "pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2": Phase="Pending", Reason="", readiness=false. Elapsed: 95.068625ms Feb 3 21:34:37.682: INFO: Pod "pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104040208s Feb 3 21:34:39.689: INFO: Pod "pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.111130519s Feb 3 21:34:41.697: INFO: Pod "pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119143014s Feb 3 21:34:43.714: INFO: Pod "pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136230185s STEP: Saw pod success Feb 3 21:34:43.714: INFO: Pod "pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2" satisfied condition "success or failure" Feb 3 21:34:43.718: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2 container env-test: STEP: delete the pod Feb 3 21:34:43.811: INFO: Waiting for pod pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2 to disappear Feb 3 21:34:43.816: INFO: Pod pod-configmaps-a9fbc185-7cce-4935-a173-4330bdd7aac2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:34:43.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1041" for this suite. • [SLOW TEST:8.403 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:34:43.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 3 21:34:44.028: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202260 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 3 21:34:44.029: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202260 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 3 21:34:54.039: 
INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202293 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 3 21:34:54.039: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202293 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 3 21:35:04.047: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202317 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 3 21:35:04.048: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202317 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 3 21:35:14.067: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202341 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 3 21:35:14.068: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-a 44fe1193-54b2-4077-9412-325595bf4c8a 6202341 0 2020-02-03 21:34:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 3 21:35:24.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-b 7a52b0a0-cf28-4b2e-af26-4a4dea11b649 6202365 0 2020-02-03 21:35:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 3 21:35:24.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-b 7a52b0a0-cf28-4b2e-af26-4a4dea11b649 6202365 0 2020-02-03 21:35:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 3 21:35:34.087: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-b 7a52b0a0-cf28-4b2e-af26-4a4dea11b649 
6202385 0 2020-02-03 21:35:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 3 21:35:34.088: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1135 /api/v1/namespaces/watch-1135/configmaps/e2e-watch-test-configmap-b 7a52b0a0-cf28-4b2e-af26-4a4dea11b649 6202385 0 2020-02-03 21:35:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:35:44.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1135" for this suite. • [SLOW TEST:60.276 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":79,"skipped":1191,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:35:44.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2989891a-05f0-4e18-ae2c-1f38e1a1c2b9 STEP: Creating a pod to test consume configMaps Feb 3 21:35:44.235: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01" in namespace "projected-5031" to be "success or failure" Feb 3 21:35:44.273: INFO: Pod "pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01": Phase="Pending", Reason="", readiness=false. Elapsed: 37.528431ms Feb 3 21:35:46.281: INFO: Pod "pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045802569s Feb 3 21:35:48.289: INFO: Pod "pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053805735s Feb 3 21:35:50.299: INFO: Pod "pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.06372756s STEP: Saw pod success Feb 3 21:35:50.299: INFO: Pod "pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01" satisfied condition "success or failure" Feb 3 21:35:50.306: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01 container projected-configmap-volume-test: STEP: delete the pod Feb 3 21:35:50.386: INFO: Waiting for pod pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01 to disappear Feb 3 21:35:50.415: INFO: Pod pod-projected-configmaps-09175386-5b8d-4428-b01c-a80903badf01 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:35:50.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5031" for this suite. • [SLOW TEST:6.400 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1207,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:35:50.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-eb0252dc-71cd-4d32-b74b-d3accf0a177d STEP: Creating configMap with name cm-test-opt-upd-7d6ca3b4-ba72-4207-a9ae-97a774938a91 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-eb0252dc-71cd-4d32-b74b-d3accf0a177d STEP: Updating configmap cm-test-opt-upd-7d6ca3b4-ba72-4207-a9ae-97a774938a91 STEP: Creating configMap with name cm-test-opt-create-53348a90-7e8f-4429-b89f-f2b02a6b11b8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:37:32.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-321" for this suite. 
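
Three ConfigMaps drive the optional-updates test above: one is deleted after the pod starts, one is updated in place, and one is created late, and the pod keeps running throughout because the volume sources are marked optional; the kubelet then reconciles the mounted files toward whatever currently exists. A minimal sketch of such a volume source (the referenced name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
				// Optional: the pod starts, and keeps running, even
				// while the referenced ConfigMap does not exist.
				Optional: &optional,
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
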
• [SLOW TEST:102.199 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1215,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:37:32.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-085f79f7-81a9-4e15-8d93-d961f564d4d8 STEP: Creating a pod to test consume secrets Feb 3 21:37:32.879: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1" in namespace "projected-5645" to be "success or failure" Feb 3 21:37:32.901: INFO: Pod "pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.801747ms Feb 3 21:37:34.908: INFO: Pod "pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029552582s Feb 3 21:37:36.917: INFO: Pod "pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038245774s Feb 3 21:37:38.926: INFO: Pod "pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046740458s Feb 3 21:37:40.933: INFO: Pod "pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053717781s STEP: Saw pod success Feb 3 21:37:40.933: INFO: Pod "pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1" satisfied condition "success or failure" Feb 3 21:37:40.940: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1 container projected-secret-volume-test: STEP: delete the pod Feb 3 21:37:41.200: INFO: Waiting for pod pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1 to disappear Feb 3 21:37:41.212: INFO: Pod pod-projected-secrets-2ad033d2-e8fc-424c-9462-bf89e71f6ef1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:37:41.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5645" for this suite. 
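The "mappings" in the spec name are KeyToPath items that rename a secret key inside the mount. A minimal sketch, assuming a secret key data-1 remapped to new-path-data-1 (illustrative names, not the suite's generated fixtures):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume remaps a Secret key to a custom file name under the
// mount point instead of the default key-named file.
func projectedSecretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key in the Secret (assumed)
							Path: "new-path-data-1", // file name under the mount (assumed)
						}},
					},
				}},
			},
		},
	}
}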
• [SLOW TEST:8.518 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:37:41.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 3 21:37:41.433: INFO: Waiting up to 5m0s for pod "pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7" in namespace "emptydir-6325" to be "success or failure" Feb 3 21:37:41.573: INFO: Pod "pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 139.699869ms Feb 3 21:37:43.614: INFO: Pod "pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180861138s Feb 3 21:37:45.622: INFO: Pod "pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188219025s Feb 3 21:37:47.633: INFO: Pod "pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199327281s Feb 3 21:37:49.640: INFO: Pod "pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.206679757s STEP: Saw pod success Feb 3 21:37:49.640: INFO: Pod "pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7" satisfied condition "success or failure" Feb 3 21:37:49.644: INFO: Trying to get logs from node jerma-node pod pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7 container test-container: STEP: delete the pod Feb 3 21:37:49.682: INFO: Waiting for pod pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7 to disappear Feb 3 21:37:49.701: INFO: Pod pod-feffdc9e-9162-4694-bcb5-6e5ee23ba9b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:37:49.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6325" for this suite. 
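"(root,0666,tmpfs)" encodes the test parameters: run as root, expect file mode 0666, back the emptyDir with memory rather than node disk. A rough equivalent pod spec, assuming busybox in place of the suite's dedicated mount-test image:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func tmpfsEmptyDirPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" backs the volume with tmpfs.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox",
			// Create a file, force mode 0666, and read the permissions back.
			Command: []string{"sh", "-c",
				"touch /test-volume/test-file && chmod 0666 /test-volume/test-file && stat -c %a /test-volume/test-file"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
}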
• [SLOW TEST:8.489 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:37:49.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:37:49.959: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:37:58.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7561" for this suite. 
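The spec above dials the exec subresource's websocket framing directly; the everyday client-side equivalent is client-go's stock executor, which negotiates the stream for you. A sketch with a hypothetical container name "main":

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a command in a pod via the pods/exec subresource and wires
// the remote stdout/stderr to the local process.
func execInPod(config *restclient.Config, cs kubernetes.Interface, ns, pod string) error {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main", // hypothetical container name
			Command:   []string{"echo", "remote execution works"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return err
	}
	return exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr})
}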
• [SLOW TEST:8.757 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1290,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:37:58.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-5ca67aa9-5271-45d7-a373-9ebe8da43c65 STEP: Creating a pod to test consume configMaps Feb 3 21:37:58.789: INFO: Waiting up to 5m0s for pod "pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d" in namespace "configmap-5376" to be "success or failure" Feb 3 21:37:58.805: INFO: Pod "pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.475055ms Feb 3 21:38:00.814: INFO: Pod "pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024229682s Feb 3 21:38:02.823: INFO: Pod "pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033735674s Feb 3 21:38:04.829: INFO: Pod "pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039319599s Feb 3 21:38:06.837: INFO: Pod "pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047369846s STEP: Saw pod success Feb 3 21:38:06.837: INFO: Pod "pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d" satisfied condition "success or failure" Feb 3 21:38:06.840: INFO: Trying to get logs from node jerma-node pod pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d container configmap-volume-test: STEP: delete the pod Feb 3 21:38:07.148: INFO: Waiting for pod pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d to disappear Feb 3 21:38:07.158: INFO: Pod pod-configmaps-011f06f5-2eb1-4cda-adb9-e51f657e354d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:38:07.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5376" for this suite. 
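Consuming one ConfigMap in multiple volumes just means two volume entries referencing the same object, each with its own mount path. A minimal sketch with assumed names:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func multiMountPodSpec() corev1.PodSpec {
	cm := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
	}
	cm2 := cm // second volume, same ConfigMap
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{
			{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{ConfigMap: &cm}},
			{Name: "configmap-volume-2", VolumeSource: corev1.VolumeSource{ConfigMap: &cm2}},
		},
		Containers: []corev1.Container{{
			Name:  "configmap-volume-test",
			Image: "busybox",
			// Read the same key through both mounts; "data-1" is an assumed key.
			Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1 /etc/configmap-volume-2/data-1"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
			},
		}},
	}
}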
• [SLOW TEST:8.688 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1312,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:38:07.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-a7c2299d-3a76-46af-9c46-fefa83e8464f STEP: Creating a pod to test consume configMaps Feb 3 21:38:07.341: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8" in namespace "projected-8477" to be "success or failure" Feb 3 21:38:07.366: INFO: Pod "pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.887426ms Feb 3 21:38:09.372: INFO: Pod "pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030569657s Feb 3 21:38:11.377: INFO: Pod "pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035921735s Feb 3 21:38:13.384: INFO: Pod "pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042840893s Feb 3 21:38:15.396: INFO: Pod "pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054094159s STEP: Saw pod success Feb 3 21:38:15.396: INFO: Pod "pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8" satisfied condition "success or failure" Feb 3 21:38:15.401: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8 container projected-configmap-volume-test: STEP: delete the pod Feb 3 21:38:15.445: INFO: Waiting for pod pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8 to disappear Feb 3 21:38:15.464: INFO: Pod pod-projected-configmaps-f99969b6-976a-45dd-911b-41330066ebe8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:38:15.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8477" for this suite. 
• [SLOW TEST:8.303 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1315,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:38:15.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9856/configmap-test-3d1c51b7-f274-4f7b-afc9-a41bb6bb84de STEP: Creating a pod to test consume configMaps Feb 3 21:38:15.689: INFO: Waiting up to 5m0s for pod "pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629" in namespace "configmap-9856" to be "success or failure" Feb 3 21:38:15.720: INFO: Pod "pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629": Phase="Pending", Reason="", readiness=false. Elapsed: 31.314238ms Feb 3 21:38:17.728: INFO: Pod "pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03870345s Feb 3 21:38:19.739: INFO: Pod "pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050523272s Feb 3 21:38:21.748: INFO: Pod "pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058878756s Feb 3 21:38:23.796: INFO: Pod "pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107086954s STEP: Saw pod success Feb 3 21:38:23.796: INFO: Pod "pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629" satisfied condition "success or failure" Feb 3 21:38:23.799: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629 container env-test: STEP: delete the pod Feb 3 21:38:23.887: INFO: Waiting for pod pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629 to disappear Feb 3 21:38:23.981: INFO: Pod pod-configmaps-e92fa643-ea27-4364-ba2b-40ae76f65629 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:38:23.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9856" for this suite. 
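The environment-variable consumption path uses ConfigMapKeyRef rather than a volume. A minimal sketch, with assumed ConfigMap and key names:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// configMapEnvContainer pulls one ConfigMap key into the container
// environment; printing env makes the injected value observable in logs.
func configMapEnvContainer() corev1.Container {
	return corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1",
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
}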
• [SLOW TEST:8.522 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1327,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:38:24.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:38:24.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0" in namespace "projected-9365" to be "success or failure" Feb 3 21:38:24.178: INFO: Pod "downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.5964ms Feb 3 21:38:26.187: INFO: Pod "downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027071784s Feb 3 21:38:28.194: INFO: Pod "downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033718591s Feb 3 21:38:30.201: INFO: Pod "downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041108057s Feb 3 21:38:32.210: INFO: Pod "downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049668048s STEP: Saw pod success Feb 3 21:38:32.210: INFO: Pod "downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0" satisfied condition "success or failure" Feb 3 21:38:32.216: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0 container client-container: STEP: delete the pod Feb 3 21:38:32.550: INFO: Waiting for pod downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0 to disappear Feb 3 21:38:32.611: INFO: Pod downwardapi-volume-6c36df95-6cab-44d1-844b-a2f9edcc88c0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:38:32.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9365" for this suite. 
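The memory limit reaches the container through a downward API projection backed by a ResourceFieldRef; the container must actually declare a limit for limits.memory to resolve. A sketch with an assumed 64Mi limit:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func memoryLimitDownwardPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path: "memory_limit",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "limits.memory",
								},
							}},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
			Resources: corev1.ResourceRequirements{
				// The limit the downward API file will report (assumed value).
				Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
			},
			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
	}
}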
• [SLOW TEST:8.626 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1331,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:38:32.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-4082d30f-0007-420d-a1fc-9a2aa5dfa201 STEP: Creating a pod to test consume secrets Feb 3 21:38:32.827: INFO: Waiting up to 5m0s for pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1" in namespace "secrets-4196" to be "success or failure" Feb 3 21:38:32.842: INFO: Pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.819791ms Feb 3 21:38:34.851: INFO: Pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023689528s Feb 3 21:38:36.860: INFO: Pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032814062s Feb 3 21:38:38.879: INFO: Pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051580706s Feb 3 21:38:40.896: INFO: Pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1": Phase="Running", Reason="", readiness=true. Elapsed: 8.069210088s Feb 3 21:38:42.902: INFO: Pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075007714s STEP: Saw pod success Feb 3 21:38:42.902: INFO: Pod "pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1" satisfied condition "success or failure" Feb 3 21:38:42.907: INFO: Trying to get logs from node jerma-node pod pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1 container secret-volume-test: STEP: delete the pod Feb 3 21:38:43.048: INFO: Waiting for pod pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1 to disappear Feb 3 21:38:43.062: INFO: Pod pod-secrets-856e9b1f-53f8-418f-abda-c20fa6ef14b1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:38:43.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4196" for this suite. 
• [SLOW TEST:10.443 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1332,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:38:43.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 3 21:38:44.131: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 3 21:38:46.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362723, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:38:48.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362723, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:38:50.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362723, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:38:53.196: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:38:53.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:38:54.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2786" for this suite. 
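A conversion webhook like the one deployed above receives a ConversionReview and must return the converted objects together with the request UID. The skeleton below is not the suite's webhook, just the minimal shape of such a handler; the apiVersion-only rewrite assumes the two CRD versions share a schema, which is the trivial case:

package main

import (
	"encoding/json"
	"net/http"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// serveConvert decodes a ConversionReview, rewrites each object to the
// requested version, and echoes the request UID back in the response.
func serveConvert(w http.ResponseWriter, r *http.Request) {
	var review apiextensionsv1.ConversionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	resp := &apiextensionsv1.ConversionResponse{
		UID:    review.Request.UID,
		Result: metav1.Status{Status: metav1.StatusSuccess},
	}
	for _, raw := range review.Request.Objects {
		obj := map[string]interface{}{}
		if err := json.Unmarshal(raw.Raw, &obj); err != nil {
			resp.Result = metav1.Status{Status: metav1.StatusFailure, Message: err.Error()}
			break
		}
		// Versions that differ only in schema version need just an
		// apiVersion rewrite; real webhooks also remap fields here.
		obj["apiVersion"] = review.Request.DesiredAPIVersion
		converted, _ := json.Marshal(obj)
		resp.ConvertedObjects = append(resp.ConvertedObjects, runtime.RawExtension{Raw: converted})
	}
	json.NewEncoder(w).Encode(apiextensionsv1.ConversionReview{
		TypeMeta: review.TypeMeta,
		Response: resp,
	})
}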
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:11.736 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":90,"skipped":1338,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:38:54.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Feb 3 21:38:54.970: INFO: Waiting up to 5m0s for pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27" in namespace "var-expansion-2454" to be "success or failure" Feb 3 21:38:54.974: INFO: Pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377631ms Feb 3 21:38:56.982: INFO: Pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012608052s Feb 3 21:38:58.994: INFO: Pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024401067s Feb 3 21:39:01.003: INFO: Pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03336485s Feb 3 21:39:03.035: INFO: Pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065060089s Feb 3 21:39:05.046: INFO: Pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07586158s STEP: Saw pod success Feb 3 21:39:05.046: INFO: Pod "var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27" satisfied condition "success or failure" Feb 3 21:39:05.050: INFO: Trying to get logs from node jerma-node pod var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27 container dapi-container: STEP: delete the pod Feb 3 21:39:05.084: INFO: Waiting for pod var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27 to disappear Feb 3 21:39:05.091: INFO: Pod var-expansion-89e89951-db05-4cd6-87c6-3b6676efad27 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:39:05.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2454" for this suite. 
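Composition works because the kubelet expands $(VAR) references against env entries defined earlier in the same list. A sketch mirroring the FOO/BAR shape such tests use (the values are assumptions):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func composedEnvContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{Name: "FOO", Value: "foo-value"},
			{Name: "BAR", Value: "bar-value"},
			// $(FOO) and $(BAR) are expanded from the entries above,
			// composing a brand-new value at container start.
			{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
		},
	}
}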
• [SLOW TEST:10.293 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:39:05.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3721 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-3721 Feb 3 21:39:05.298: INFO: Found 0 stateful pods, waiting for 1 Feb 3 21:39:15.306: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 3 21:39:15.429: INFO: Deleting all statefulset in ns statefulset-3721 Feb 3 21:39:15.455: INFO: Scaling statefulset ss to 0 Feb 3 21:39:35.653: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 21:39:35.657: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:39:35.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3721" for this suite. 
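The scale subresource is a separate endpoint from the StatefulSet object itself, which is why the spec fetches it, edits Spec.Replicas, and writes it back rather than patching the StatefulSet. A sketch of that round trip (client-go v0.18+ signatures; the v1.17-era client used in this run takes no context argument):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet drives the /scale subresource directly: read the Scale
// object, bump the replica count, and write it back.
func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}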
• [SLOW TEST:30.609 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":92,"skipped":1372,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:39:35.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 3 21:39:49.959: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:39:49.986: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 21:39:51.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:39:52.157: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 21:39:53.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:39:53.993: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 21:39:55.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:39:55.993: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 21:39:57.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:39:57.992: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 21:39:59.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:39:59.993: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 21:40:01.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:40:01.992: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 21:40:03.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 21:40:03.995: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:40:03.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5850" for this suite. 
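A postStart httpGet hook fires against a target host and port right after the container starts, which is why the spec above first creates a separate handler pod and then waits out the deletion bookkeeping. The shape of the hook, with an assumed port, path, and handler IP:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func postStartHTTPContainer(handlerIP string) corev1.Container {
	return corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "k8s.gcr.io/pause:3.1", // placeholder; the suite uses its own image
		Lifecycle: &corev1.Lifecycle{
			// Named corev1.Handler in 1.17-era APIs, corev1.LifecycleHandler today.
			PostStart: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=poststart",
					Port: intstr.FromInt(8080),
					Host: handlerIP, // IP of the pod created to receive the hook
				},
			},
		},
	}
}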
• [SLOW TEST:28.297 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:40:04.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c25d547b-fd86-48c7-9417-8ee9a2c707d1 STEP: Creating secret with name s-test-opt-upd-5a71f914-adbc-4302-8d51-c7f43b5c46a1 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c25d547b-fd86-48c7-9417-8ee9a2c707d1 STEP: Updating secret s-test-opt-upd-5a71f914-adbc-4302-8d51-c7f43b5c46a1 STEP: Creating secret with name s-test-opt-create-5344b116-43a9-4d18-9300-79d292803de6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:41:23.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6943" for this suite. 
• [SLOW TEST:79.222 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:41:23.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:41:23.366: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930" in namespace "security-context-test-2205" to be "success or failure" Feb 3 21:41:23.459: INFO: Pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930": Phase="Pending", Reason="", readiness=false. Elapsed: 93.198823ms Feb 3 21:41:25.469: INFO: Pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103436562s Feb 3 21:41:27.477: INFO: Pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110607778s Feb 3 21:41:29.483: INFO: Pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117190065s Feb 3 21:41:31.491: INFO: Pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12494716s Feb 3 21:41:33.497: INFO: Pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131031659s Feb 3 21:41:33.497: INFO: Pod "busybox-readonly-false-67b39dc0-ceb5-483f-b16a-df926f5c6930" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:41:33.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2205" for this suite. 
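The writable-rootfs check comes down to a single securityContext field. A sketch of the container spec, with an assumed write command that only succeeds because the root filesystem is left writable:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func writableRootfsContainer() corev1.Container {
	readOnly := false
	return corev1.Container{
		Name:  "busybox-readonly-false",
		Image: "busybox",
		// Writing outside any mounted volume exercises the container's own
		// root filesystem.
		Command: []string{"sh", "-c", "echo checking > /file_to_check && cat /file_to_check"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
}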
• [SLOW TEST:10.276 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1434,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:41:33.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-02a591f4-b51a-4917-86fa-d191185fdf3b STEP: Creating a pod to test consume configMaps Feb 3 21:41:33.877: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21" in namespace "projected-6952" to be "success or failure" Feb 3 21:41:33.892: INFO: Pod "pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21": Phase="Pending", Reason="", readiness=false. Elapsed: 15.448695ms Feb 3 21:41:35.905: INFO: Pod "pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027787551s Feb 3 21:41:37.912: INFO: Pod "pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034675075s Feb 3 21:41:39.919: INFO: Pod "pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042385234s Feb 3 21:41:41.926: INFO: Pod "pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.049157076s STEP: Saw pod success Feb 3 21:41:41.926: INFO: Pod "pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21" satisfied condition "success or failure" Feb 3 21:41:41.931: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21 container projected-configmap-volume-test: STEP: delete the pod Feb 3 21:41:41.994: INFO: Waiting for pod pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21 to disappear Feb 3 21:41:42.005: INFO: Pod pod-projected-configmaps-7c2879dd-d22b-404b-b32f-eba498ac3e21 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:41:42.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6952" for this suite. • [SLOW TEST:8.537 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1436,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:41:42.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-f86c9623-e3bf-4b4f-ab36-35422eadc21c STEP: Creating a pod to test consume secrets Feb 3 21:41:42.137: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4" in namespace "projected-7860" to be "success or failure" Feb 3 21:41:42.180: INFO: Pod "pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.528627ms Feb 3 21:41:44.187: INFO: Pod "pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049546485s Feb 3 21:41:46.194: INFO: Pod "pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05690849s Feb 3 21:41:48.201: INFO: Pod "pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064356482s Feb 3 21:41:50.208: INFO: Pod "pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.070519817s STEP: Saw pod success Feb 3 21:41:50.208: INFO: Pod "pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4" satisfied condition "success or failure" Feb 3 21:41:50.212: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4 container projected-secret-volume-test: STEP: delete the pod Feb 3 21:41:50.269: INFO: Waiting for pod pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4 to disappear Feb 3 21:41:50.277: INFO: Pod pod-projected-secrets-4bb6b2e9-eb15-4a07-bfe8-7b8b5adbcfe4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:41:50.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7860" for this suite. • [SLOW TEST:8.246 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:41:50.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4218 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[] Feb 3 21:41:50.416: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[] (13.419358ms elapsed) STEP: Creating pod pod1 in namespace services-4218 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[pod1:[80]] Feb 3 21:41:54.656: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.230795861s elapsed, will retry) Feb 3 21:41:56.793: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[pod1:[80]] (6.367142071s elapsed) STEP: Creating pod pod2 in namespace services-4218 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[pod1:[80] pod2:[80]] Feb 3 21:42:01.213: INFO: Unexpected endpoints: found map[4c00cc53-a1a5-4173-a6fd-6c985cd9cab0:[80]], expected map[pod1:[80] pod2:[80]] (4.411541759s elapsed, will retry) Feb 3 21:42:04.734: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[pod1:[80] pod2:[80]] 
(7.933444778s elapsed) STEP: Deleting pod pod1 in namespace services-4218 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[pod2:[80]] Feb 3 21:42:04.805: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[pod2:[80]] (48.920796ms elapsed) STEP: Deleting pod pod2 in namespace services-4218 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4218 to expose endpoints map[] Feb 3 21:42:05.921: INFO: successfully validated that service endpoint-test2 in namespace services-4218 exposes endpoints map[] (1.105941378s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:42:05.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4218" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.779 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":98,"skipped":1469,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:42:06.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:42:06.152: INFO: Creating deployment "test-recreate-deployment" Feb 3 21:42:06.157: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 3 21:42:06.233: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 3 21:42:08.259: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 3 21:42:08.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362926, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362926, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362926, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716362926, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:42:10.320: INFO: deployment status: [identical Available=False/Progressing=True status dump as at 21:42:08; the same unchanged status was logged again at 21:42:12.282 and 21:42:14.276 while the rollout waited] Feb 3 21:42:16.271: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 3 21:42:16.283: INFO: Updating deployment test-recreate-deployment Feb 3 21:42:16.283: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 3 21:42:16.698: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6299 /apis/apps/v1/namespaces/deployment-6299/deployments/test-recreate-deployment 95146add-6db7-4437-8992-9c12fca78c2c 6204000 2 2020-02-03 21:42:06 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002823f98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-03 21:42:16 +0000 UTC,LastTransitionTime:2020-02-03 21:42:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-03 21:42:16 +0000 UTC,LastTransitionTime:2020-02-03 21:42:06 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Feb 3 21:42:16.780: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-6299 /apis/apps/v1/namespaces/deployment-6299/replicasets/test-recreate-deployment-5f94c574ff 2e27b75f-a566-4270-9e52-9ae38782a0eb 6203999 1 2020-02-03 21:42:16 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 95146add-6db7-4437-8992-9c12fca78c2c 0xc002ef04f7 0xc002ef04f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ef0558 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:42:16.780: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 3 21:42:16.781: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-6299 /apis/apps/v1/namespaces/deployment-6299/replicasets/test-recreate-deployment-799c574856 1a320843-e4de-4a42-ada1-e44f9109107f 6203987 2 2020-02-03 21:42:06 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 95146add-6db7-4437-8992-9c12fca78c2c 0xc002ef05e7 0xc002ef05e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ef0658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:42:16.839: INFO: Pod "test-recreate-deployment-5f94c574ff-pbng8" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-pbng8 test-recreate-deployment-5f94c574ff- deployment-6299 /api/v1/namespaces/deployment-6299/pods/test-recreate-deployment-5f94c574ff-pbng8 8883348e-90a6-49b1-a2ec-cf3ae26871bf 6204001 0 2020-02-03 21:42:16 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 2e27b75f-a566-4270-9e52-9ae38782a0eb 0xc002ef0c97 0xc002ef0c98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7cmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7cmz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7cmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:42:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:42:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:42:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 21:42:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-03 21:42:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:42:16.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6299" for this suite. • [SLOW TEST:10.793 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":99,"skipped":1473,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:42:16.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 3 21:42:17.001: INFO: Waiting up to 5m0s for pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92" in namespace "emptydir-3490" to be "success or failure" Feb 3 21:42:17.026: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Pending", Reason="", readiness=false. Elapsed: 25.120181ms Feb 3 21:42:19.042: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041135922s Feb 3 21:42:21.055: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053925336s Feb 3 21:42:23.062: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060870114s Feb 3 21:42:25.068: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067524501s Feb 3 21:42:27.074: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073005544s Feb 3 21:42:29.083: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Pending", Reason="", readiness=false. Elapsed: 12.082435784s Feb 3 21:42:31.093: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.092495567s STEP: Saw pod success Feb 3 21:42:31.093: INFO: Pod "pod-cea8336d-9e09-47fe-b380-d6b77e21ec92" satisfied condition "success or failure" Feb 3 21:42:31.099: INFO: Trying to get logs from node jerma-node pod pod-cea8336d-9e09-47fe-b380-d6b77e21ec92 container test-container: STEP: delete the pod Feb 3 21:42:31.359: INFO: Waiting for pod pod-cea8336d-9e09-47fe-b380-d6b77e21ec92 to disappear Feb 3 21:42:31.371: INFO: Pod pod-cea8336d-9e09-47fe-b380-d6b77e21ec92 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:42:31.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3490" for this suite. • [SLOW TEST:14.515 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:42:31.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 3 21:42:31.572: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 21:42:31.607: INFO: Waiting for terminating namespaces to be deleted... 
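------------------------------
The "Logging pods the kubelet thinks are on node ..." lines that follow are the scheduling-predicates suite inventorying what is already running on each node before it exercises hostPort conflicts. A minimal client-go sketch of that per-node listing, assuming a recent client-go (v0.18+, where API calls take a context); the kubeconfig path and node name are the ones from this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Select only pods bound to one node, across all namespaces --
	// the same view the "Logging pods ..." step prints below.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=jerma-node"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%v restart count %d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}
------------------------------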
Feb 3 21:42:31.653: INFO: Logging pods the kubelet thinks are on node jerma-node before test Feb 3 21:42:31.663: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Feb 3 21:42:31.663: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:42:31.663: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 3 21:42:31.663: INFO: Container weave ready: true, restart count 1 Feb 3 21:42:31.663: INFO: Container weave-npc ready: true, restart count 0 Feb 3 21:42:31.664: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Feb 3 21:42:31.692: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Feb 3 21:42:31.692: INFO: Container kube-apiserver ready: true, restart count 1 Feb 3 21:42:31.692: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Feb 3 21:42:31.692: INFO: Container etcd ready: true, restart count 1 Feb 3 21:42:31.692: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Feb 3 21:42:31.692: INFO: Container coredns ready: true, restart count 0 Feb 3 21:42:31.692: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Feb 3 21:42:31.692: INFO: Container coredns ready: true, restart count 0 Feb 3 21:42:31.692: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Feb 3 21:42:31.692: INFO: Container kube-controller-manager ready: true, restart count 3 Feb 3 21:42:31.692: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Feb 3 21:42:31.692: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:42:31.692: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 3 21:42:31.692: INFO: Container weave ready: true, restart count 0 Feb 3 21:42:31.692: INFO: Container weave-npc ready: true, restart count 0 Feb 3 21:42:31.692: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Feb 3 21:42:31.692: INFO: Container kube-scheduler ready: true, restart count 4 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e7d7208a-494a-4a81-ba5e-64c6fd13f6fc 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-e7d7208a-494a-4a81-ba5e-64c6fd13f6fc off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-e7d7208a-494a-4a81-ba5e-64c6fd13f6fc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:43:00.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7844" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:28.766 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":101,"skipped":1521,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:43:00.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-aa6d2d2b-c5b0-49dc-9aeb-0cbfcd4d2854 STEP: Creating a pod to test consume secrets Feb 3 21:43:00.304: INFO: Waiting up to 5m0s for pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3" in namespace "secrets-8880" to be "success or failure" Feb 3 21:43:00.320: INFO: Pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.576656ms Feb 3 21:43:02.328: INFO: Pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024135086s Feb 3 21:43:04.333: INFO: Pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028934371s Feb 3 21:43:06.338: INFO: Pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.033992865s Feb 3 21:43:08.349: INFO: Pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045394849s Feb 3 21:43:10.358: INFO: Pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05393469s STEP: Saw pod success Feb 3 21:43:10.358: INFO: Pod "pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3" satisfied condition "success or failure" Feb 3 21:43:10.365: INFO: Trying to get logs from node jerma-node pod pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3 container secret-volume-test: STEP: delete the pod Feb 3 21:43:10.404: INFO: Waiting for pod pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3 to disappear Feb 3 21:43:10.412: INFO: Pod pod-secrets-e6756837-ac8c-4a60-b14f-070657acf7f3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:43:10.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8880" for this suite. • [SLOW TEST:10.272 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1523,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:43:10.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-af933c44-aa3e-4044-b53b-3f6701d8ab95 STEP: Creating a pod to test consume configMaps Feb 3 21:43:10.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66" in namespace "configmap-5841" to be "success or failure" Feb 3 21:43:10.838: INFO: Pod "pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66": Phase="Pending", Reason="", readiness=false. Elapsed: 145.78376ms Feb 3 21:43:13.107: INFO: Pod "pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41438724s Feb 3 21:43:15.119: INFO: Pod "pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.426342953s Feb 3 21:43:17.126: INFO: Pod "pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433762047s Feb 3 21:43:19.135: INFO: Pod "pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.443026684s STEP: Saw pod success Feb 3 21:43:19.136: INFO: Pod "pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66" satisfied condition "success or failure" Feb 3 21:43:19.141: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66 container configmap-volume-test: STEP: delete the pod Feb 3 21:43:19.195: INFO: Waiting for pod pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66 to disappear Feb 3 21:43:19.207: INFO: Pod pod-configmaps-d44fba79-e265-4600-ae1d-59fa12d85c66 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:43:19.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5841" for this suite. • [SLOW TEST:8.816 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:43:19.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 3 21:43:19.330: INFO: Waiting up to 5m0s for pod "pod-ca1aff61-33c5-4c74-8a57-e5d56b393603" in namespace "emptydir-6040" to be "success or failure" Feb 3 21:43:19.412: INFO: Pod "pod-ca1aff61-33c5-4c74-8a57-e5d56b393603": Phase="Pending", Reason="", readiness=false. Elapsed: 82.685887ms Feb 3 21:43:21.419: INFO: Pod "pod-ca1aff61-33c5-4c74-8a57-e5d56b393603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089332274s Feb 3 21:43:23.427: INFO: Pod "pod-ca1aff61-33c5-4c74-8a57-e5d56b393603": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097136994s Feb 3 21:43:25.432: INFO: Pod "pod-ca1aff61-33c5-4c74-8a57-e5d56b393603": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102573673s Feb 3 21:43:27.440: INFO: Pod "pod-ca1aff61-33c5-4c74-8a57-e5d56b393603": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.109907315s STEP: Saw pod success Feb 3 21:43:27.440: INFO: Pod "pod-ca1aff61-33c5-4c74-8a57-e5d56b393603" satisfied condition "success or failure" Feb 3 21:43:27.444: INFO: Trying to get logs from node jerma-node pod pod-ca1aff61-33c5-4c74-8a57-e5d56b393603 container test-container: STEP: delete the pod Feb 3 21:43:27.486: INFO: Waiting for pod pod-ca1aff61-33c5-4c74-8a57-e5d56b393603 to disappear Feb 3 21:43:27.493: INFO: Pod pod-ca1aff61-33c5-4c74-8a57-e5d56b393603 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:43:27.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6040" for this suite. • [SLOW TEST:8.293 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1553,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:43:27.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:44:01.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8477" for this suite. STEP: Destroying namespace "nsdeletetest-1070" for this suite. Feb 3 21:44:01.205: INFO: Namespace nsdeletetest-1070 was already deleted STEP: Destroying namespace "nsdeletetest-4133" for this suite. 
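------------------------------
The namespace test above is built on the fact that deleting a namespace only marks it Terminating: its pods are garbage-collected asynchronously, and the namespace object itself disappears last, which is why the suite has an explicit "Waiting for the namespace to be removed" step. A minimal client-go sketch of that delete-then-poll pattern, assuming a recent client-go (v0.18+); the namespace name is hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "nsdeletetest-demo" // hypothetical namespace name

	// Delete only begins the teardown; the namespace enters Terminating.
	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	for {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		switch {
		case err == nil:
			time.Sleep(2 * time.Second) // still Terminating; keep polling
		case apierrors.IsNotFound(err):
			fmt.Println("namespace and all of its pods fully removed")
			return
		default:
			panic(err)
		}
	}
}
------------------------------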
• [SLOW TEST:33.708 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":105,"skipped":1593,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:44:01.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5458 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 3 21:44:01.395: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 3 21:44:37.622: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5458 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:44:37.622: INFO: >>> kubeConfig: /root/.kube/config I0203 21:44:37.683613 8 log.go:172] (0xc002661550) (0xc001654be0) Create stream I0203 21:44:37.683712 8 log.go:172] (0xc002661550) (0xc001654be0) Stream added, broadcasting: 1 I0203 21:44:37.693448 8 log.go:172] (0xc002661550) Reply frame received for 1 I0203 21:44:37.693510 8 log.go:172] (0xc002661550) (0xc002354000) Create stream I0203 21:44:37.693531 8 log.go:172] (0xc002661550) (0xc002354000) Stream added, broadcasting: 3 I0203 21:44:37.695950 8 log.go:172] (0xc002661550) Reply frame received for 3 I0203 21:44:37.696028 8 log.go:172] (0xc002661550) (0xc0015b9ae0) Create stream I0203 21:44:37.696059 8 log.go:172] (0xc002661550) (0xc0015b9ae0) Stream added, broadcasting: 5 I0203 21:44:37.698403 8 log.go:172] (0xc002661550) Reply frame received for 5 I0203 21:44:37.823458 8 log.go:172] (0xc002661550) Data frame received for 3 I0203 21:44:37.823730 8 log.go:172] (0xc002354000) (3) Data frame handling I0203 21:44:37.823830 8 log.go:172] (0xc002354000) (3) Data frame sent I0203 21:44:37.946494 8 log.go:172] (0xc002661550) Data frame received for 1 I0203 21:44:37.946714 8 log.go:172] (0xc002661550) (0xc002354000) Stream removed, broadcasting: 3 I0203 21:44:37.946797 8 log.go:172] (0xc001654be0) (1) Data frame handling I0203 21:44:37.946875 8 log.go:172] (0xc002661550) (0xc0015b9ae0) Stream removed, broadcasting: 5 I0203 21:44:37.946963 8 log.go:172] (0xc001654be0) (1) Data frame sent I0203 21:44:37.946994 8 log.go:172] (0xc002661550) (0xc001654be0) Stream removed, 
broadcasting: 1 I0203 21:44:37.947048 8 log.go:172] (0xc002661550) Go away received I0203 21:44:37.947437 8 log.go:172] (0xc002661550) (0xc001654be0) Stream removed, broadcasting: 1 I0203 21:44:37.947488 8 log.go:172] (0xc002661550) (0xc002354000) Stream removed, broadcasting: 3 I0203 21:44:37.947507 8 log.go:172] (0xc002661550) (0xc0015b9ae0) Stream removed, broadcasting: 5 Feb 3 21:44:37.947: INFO: Found all expected endpoints: [netserver-0] Feb 3 21:44:37.953: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5458 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:44:37.954: INFO: >>> kubeConfig: /root/.kube/config I0203 21:44:37.987506 8 log.go:172] (0xc001f12840) (0xc001dd25a0) Create stream I0203 21:44:37.987799 8 log.go:172] (0xc001f12840) (0xc001dd25a0) Stream added, broadcasting: 1 I0203 21:44:37.997056 8 log.go:172] (0xc001f12840) Reply frame received for 1 I0203 21:44:37.997132 8 log.go:172] (0xc001f12840) (0xc0017e6140) Create stream I0203 21:44:37.997145 8 log.go:172] (0xc001f12840) (0xc0017e6140) Stream added, broadcasting: 3 I0203 21:44:37.998565 8 log.go:172] (0xc001f12840) Reply frame received for 3 I0203 21:44:37.998666 8 log.go:172] (0xc001f12840) (0xc001dd2640) Create stream I0203 21:44:37.998679 8 log.go:172] (0xc001f12840) (0xc001dd2640) Stream added, broadcasting: 5 I0203 21:44:38.000250 8 log.go:172] (0xc001f12840) Reply frame received for 5 I0203 21:44:38.083318 8 log.go:172] (0xc001f12840) Data frame received for 3 I0203 21:44:38.083381 8 log.go:172] (0xc0017e6140) (3) Data frame handling I0203 21:44:38.083406 8 log.go:172] (0xc0017e6140) (3) Data frame sent I0203 21:44:38.145318 8 log.go:172] (0xc001f12840) Data frame received for 1 I0203 21:44:38.145561 8 log.go:172] (0xc001f12840) (0xc0017e6140) Stream removed, broadcasting: 3 I0203 21:44:38.145612 8 log.go:172] (0xc001dd25a0) (1) Data frame handling I0203 21:44:38.145633 8 log.go:172] (0xc001dd25a0) (1) Data frame sent I0203 21:44:38.145780 8 log.go:172] (0xc001f12840) (0xc001dd2640) Stream removed, broadcasting: 5 I0203 21:44:38.146010 8 log.go:172] (0xc001f12840) (0xc001dd25a0) Stream removed, broadcasting: 1 I0203 21:44:38.146047 8 log.go:172] (0xc001f12840) Go away received I0203 21:44:38.146393 8 log.go:172] (0xc001f12840) (0xc001dd25a0) Stream removed, broadcasting: 1 I0203 21:44:38.146422 8 log.go:172] (0xc001f12840) (0xc0017e6140) Stream removed, broadcasting: 3 I0203 21:44:38.146447 8 log.go:172] (0xc001f12840) (0xc001dd2640) Stream removed, broadcasting: 5 Feb 3 21:44:38.146: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:44:38.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5458" for this suite. 
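------------------------------
The two ExecWithOptions blocks above run curl inside a host-network test pod against each netserver pod's /hostName endpoint (10.44.0.1 and 10.32.0.4 in this run) to prove node-to-pod HTTP reachability. A stdlib-only sketch of the same check, runnable from anywhere that can reach the pod network; the IPs are copied from this log, and port 8080 with the /hostName path is what the agnhost netserver serves here:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe fetches /hostName from one netserver pod IP, mirroring the
// "curl --max-time 15 http://<podIP>:8080/hostName" the suite runs.
func probe(podIP string) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second} // same budget as --max-time 15
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	for _, ip := range []string{"10.44.0.1", "10.32.0.4"} {
		name, err := probe(ip)
		if err != nil {
			fmt.Println("unreachable:", ip, err)
			continue
		}
		fmt.Printf("endpoint %s answered as %q\n", ip, name)
	}
}
------------------------------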
• [SLOW TEST:36.902 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:44:38.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-84c4811e-f9c5-41a9-8e43-071466c6b5fd STEP: Creating secret with name secret-projected-all-test-volume-2ee48181-d094-471a-9aa5-303dc28fa214 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 3 21:44:38.306: INFO: Waiting up to 5m0s for pod "projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61" in namespace "projected-7921" to be "success or failure" Feb 3 21:44:38.315: INFO: Pod "projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61": Phase="Pending", Reason="", readiness=false. Elapsed: 9.293172ms Feb 3 21:44:40.327: INFO: Pod "projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021002734s Feb 3 21:44:42.333: INFO: Pod "projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026736496s Feb 3 21:44:44.338: INFO: Pod "projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031771672s Feb 3 21:44:46.343: INFO: Pod "projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.037457589s STEP: Saw pod success Feb 3 21:44:46.344: INFO: Pod "projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61" satisfied condition "success or failure" Feb 3 21:44:46.347: INFO: Trying to get logs from node jerma-node pod projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61 container projected-all-volume-test: STEP: delete the pod Feb 3 21:44:46.409: INFO: Waiting for pod projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61 to disappear Feb 3 21:44:46.432: INFO: Pod projected-volume-b4e28dad-bb98-49a6-be59-6bb6c1349c61 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:44:46.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7921" for this suite. • [SLOW TEST:8.303 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:44:46.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Feb 3 21:44:47.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2879' Feb 3 21:44:50.695: INFO: stderr: "" Feb 3 21:44:50.695: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 21:44:50.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2879' Feb 3 21:44:51.004: INFO: stderr: "" Feb 3 21:44:51.004: INFO: stdout: "update-demo-nautilus-648tl update-demo-nautilus-hvl4c " Feb 3 21:44:51.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-648tl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879' Feb 3 21:44:51.180: INFO: stderr: "" Feb 3 21:44:51.180: INFO: stdout: "" Feb 3 21:44:51.180: INFO: update-demo-nautilus-648tl is created but not running Feb 3 21:44:56.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2879' Feb 3 21:44:56.876: INFO: stderr: "" Feb 3 21:44:56.876: INFO: stdout: "update-demo-nautilus-648tl update-demo-nautilus-hvl4c " Feb 3 21:44:56.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-648tl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879' Feb 3 21:44:57.485: INFO: stderr: "" Feb 3 21:44:57.485: INFO: stdout: "" Feb 3 21:44:57.485: INFO: update-demo-nautilus-648tl is created but not running Feb 3 21:45:02.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2879' Feb 3 21:45:02.647: INFO: stderr: "" Feb 3 21:45:02.648: INFO: stdout: "update-demo-nautilus-648tl update-demo-nautilus-hvl4c " Feb 3 21:45:02.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-648tl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879' Feb 3 21:45:02.753: INFO: stderr: "" Feb 3 21:45:02.753: INFO: stdout: "true" Feb 3 21:45:02.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-648tl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2879' Feb 3 21:45:02.860: INFO: stderr: "" Feb 3 21:45:02.860: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 21:45:02.860: INFO: validating pod update-demo-nautilus-648tl Feb 3 21:45:02.870: INFO: got data: { "image": "nautilus.jpg" } Feb 3 21:45:02.871: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 21:45:02.871: INFO: update-demo-nautilus-648tl is verified up and running Feb 3 21:45:02.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvl4c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2879' Feb 3 21:45:02.979: INFO: stderr: "" Feb 3 21:45:02.979: INFO: stdout: "true" Feb 3 21:45:02.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvl4c -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2879' Feb 3 21:45:03.069: INFO: stderr: "" Feb 3 21:45:03.069: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 21:45:03.069: INFO: validating pod update-demo-nautilus-hvl4c Feb 3 21:45:03.075: INFO: got data: { "image": "nautilus.jpg" } Feb 3 21:45:03.075: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 21:45:03.075: INFO: update-demo-nautilus-hvl4c is verified up and running STEP: using delete to clean up resources Feb 3 21:45:03.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2879' Feb 3 21:45:03.210: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 3 21:45:03.211: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 3 21:45:03.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2879' Feb 3 21:45:03.349: INFO: stderr: "No resources found in kubectl-2879 namespace.\n" Feb 3 21:45:03.349: INFO: stdout: "" Feb 3 21:45:03.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2879 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 21:45:03.484: INFO: stderr: "" Feb 3 21:45:03.484: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:45:03.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2879" for this suite. 
• [SLOW TEST:17.071 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":108,"skipped":1726,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:45:03.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:45:03.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 3 21:45:03.984: INFO: stderr: "" Feb 3 21:45:03.984: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:45:03.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-992" for this suite. 
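------------------------------
The kubectl version output above has two halves: the Client Version block is compiled into the kubectl binary itself, while the Server Version block is fetched live from the API server. A minimal sketch of the server half via client-go's discovery client (recent client-go assumed):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Discovery is how the server half of `kubectl version` is obtained.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server Version: %s (go %s, %s)\n", v.GitVersion, v.GoVersion, v.Platform)
}
------------------------------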
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":109,"skipped":1728,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:45:05.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:45:05.689: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 3 21:45:09.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9573 create -f -' Feb 3 21:45:12.073: INFO: stderr: "" Feb 3 21:45:12.074: INFO: stdout: "e2e-test-crd-publish-openapi-744-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 3 21:45:12.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9573 delete e2e-test-crd-publish-openapi-744-crds test-cr' Feb 3 21:45:12.451: INFO: stderr: "" Feb 3 21:45:12.451: INFO: stdout: "e2e-test-crd-publish-openapi-744-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 3 21:45:12.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9573 apply -f -' Feb 3 21:45:12.943: INFO: stderr: "" Feb 3 21:45:12.943: INFO: stdout: "e2e-test-crd-publish-openapi-744-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 3 21:45:12.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9573 delete e2e-test-crd-publish-openapi-744-crds test-cr' Feb 3 21:45:13.158: INFO: stderr: "" Feb 3 21:45:13.158: INFO: stdout: "e2e-test-crd-publish-openapi-744-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 3 21:45:13.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-744-crds' Feb 3 21:45:13.513: INFO: stderr: "" Feb 3 21:45:13.513: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-744-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. 
In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:45:15.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9573" for this suite. • [SLOW TEST:10.560 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":110,"skipped":1746,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:45:15.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Feb 3 21:45:16.008: INFO: Waiting up to 5m0s for pod "client-containers-672fd8c0-246b-425a-be4c-3587465375f8" in namespace "containers-5155" to be "success or failure" Feb 3 21:45:16.016: INFO: Pod "client-containers-672fd8c0-246b-425a-be4c-3587465375f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237687ms Feb 3 21:45:18.025: INFO: Pod "client-containers-672fd8c0-246b-425a-be4c-3587465375f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016708483s Feb 3 21:45:20.029: INFO: Pod "client-containers-672fd8c0-246b-425a-be4c-3587465375f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020638914s Feb 3 21:45:22.049: INFO: Pod "client-containers-672fd8c0-246b-425a-be4c-3587465375f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041031601s STEP: Saw pod success Feb 3 21:45:22.049: INFO: Pod "client-containers-672fd8c0-246b-425a-be4c-3587465375f8" satisfied condition "success or failure" Feb 3 21:45:22.053: INFO: Trying to get logs from node jerma-node pod client-containers-672fd8c0-246b-425a-be4c-3587465375f8 container test-container: STEP: delete the pod Feb 3 21:45:22.103: INFO: Waiting for pod client-containers-672fd8c0-246b-425a-be4c-3587465375f8 to disappear Feb 3 21:45:22.157: INFO: Pod client-containers-672fd8c0-246b-425a-be4c-3587465375f8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:45:22.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5155" for this suite. • [SLOW TEST:6.281 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1759,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:45:22.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-462bc37c-a748-4770-aefd-a88828c8a36f in namespace container-probe-6912 Feb 3 21:45:30.357: INFO: Started pod test-webserver-462bc37c-a748-4770-aefd-a88828c8a36f in namespace container-probe-6912 STEP: checking the pod's current state and verifying that restartCount is present Feb 3 21:45:30.365: INFO: Initial restart count of pod test-webserver-462bc37c-a748-4770-aefd-a88828c8a36f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:49:32.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6912" for this suite. 
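------------------------------
For context, the container-probe spec above starts a web server pod with an HTTP liveness probe against /healthz and verifies that restartCount stays 0 for the whole observation window. A minimal sketch of such a pod, assuming the suite's test-webserver image; the names, port, and probe timings below are illustrative, not taken from the run:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver                  # illustrative name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver    # assumption: the e2e test-webserver image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz                  # the endpoint the spec title refers to
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 3
# As long as /healthz keeps returning 2xx, the kubelet never kills the
# container, so restartCount stays 0, which is what the spec asserts.
------------------------------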
• [SLOW TEST:250.089 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1765,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:49:32.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 3 21:49:32.472: INFO: Waiting up to 5m0s for pod "pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d" in namespace "emptydir-9563" to be "success or failure" Feb 3 21:49:32.580: INFO: Pod "pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d": Phase="Pending", Reason="", readiness=false. Elapsed: 108.438992ms Feb 3 21:49:34.589: INFO: Pod "pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117633341s Feb 3 21:49:36.600: INFO: Pod "pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127845034s Feb 3 21:49:38.606: INFO: Pod "pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134262792s Feb 3 21:49:40.616: INFO: Pod "pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144263591s STEP: Saw pod success Feb 3 21:49:40.616: INFO: Pod "pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d" satisfied condition "success or failure" Feb 3 21:49:40.620: INFO: Trying to get logs from node jerma-node pod pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d container test-container: STEP: delete the pod Feb 3 21:49:40.698: INFO: Waiting for pod pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d to disappear Feb 3 21:49:40.705: INFO: Pod pod-2b9be69c-523b-4e03-820c-5d75c0d7d22d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:49:40.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9563" for this suite. 
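------------------------------
The (root,0777,default) triplet in the emptydir spec name encodes the test matrix: write as root, expect 0777 permissions, use the default emptyDir medium (node disk rather than tmpfs). A minimal sketch of the pod shape involved, with an illustrative busybox command standing in for the suite's own test image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # assumption; the suite uses its own mount-test image
    command: ["sh", "-c", "stat -c '%a' /test-volume && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                        # no medium set = "default" (backed by node storage)
# The pod runs to completion and its log is inspected, which is why the run
# above polls for Phase="Succeeded" and then fetches the container logs.
------------------------------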
• [SLOW TEST:8.484 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1771,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:49:40.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-fr2s STEP: Creating a pod to test atomic-volume-subpath Feb 3 21:49:41.002: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fr2s" in namespace "subpath-6138" to be "success or failure" Feb 3 21:49:41.017: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Pending", Reason="", readiness=false. Elapsed: 15.211678ms Feb 3 21:49:43.022: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020322523s Feb 3 21:49:45.028: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026278018s Feb 3 21:49:47.034: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032527376s Feb 3 21:49:49.040: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 8.037801035s Feb 3 21:49:51.046: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 10.043811104s Feb 3 21:49:53.052: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 12.050321885s Feb 3 21:49:55.070: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 14.068053334s Feb 3 21:49:57.076: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 16.074045798s Feb 3 21:49:59.083: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 18.081437545s Feb 3 21:50:01.090: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 20.088180339s Feb 3 21:50:03.096: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 22.09381244s Feb 3 21:50:05.104: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 24.101868648s Feb 3 21:50:07.110: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.10828914s Feb 3 21:50:09.118: INFO: Pod "pod-subpath-test-configmap-fr2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.116096289s STEP: Saw pod success Feb 3 21:50:09.118: INFO: Pod "pod-subpath-test-configmap-fr2s" satisfied condition "success or failure" Feb 3 21:50:09.122: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-fr2s container test-container-subpath-configmap-fr2s: STEP: delete the pod Feb 3 21:50:09.268: INFO: Waiting for pod pod-subpath-test-configmap-fr2s to disappear Feb 3 21:50:09.277: INFO: Pod pod-subpath-test-configmap-fr2s no longer exists STEP: Deleting pod pod-subpath-test-configmap-fr2s Feb 3 21:50:09.278: INFO: Deleting pod "pod-subpath-test-configmap-fr2s" in namespace "subpath-6138" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:50:09.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6138" for this suite. • [SLOW TEST:28.497 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":114,"skipped":1788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:50:09.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Feb 3 21:50:09.482: INFO: Waiting up to 5m0s for pod "client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772" in namespace "containers-3699" to be "success or failure" Feb 3 21:50:09.491: INFO: Pod "client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772": Phase="Pending", Reason="", readiness=false. Elapsed: 8.980437ms Feb 3 21:50:11.499: INFO: Pod "client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017184421s Feb 3 21:50:13.505: INFO: Pod "client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023527631s Feb 3 21:50:15.514: INFO: Pod "client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032102018s Feb 3 21:50:17.521: INFO: Pod "client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.039627322s STEP: Saw pod success Feb 3 21:50:17.522: INFO: Pod "client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772" satisfied condition "success or failure" Feb 3 21:50:17.527: INFO: Trying to get logs from node jerma-node pod client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772 container test-container: STEP: delete the pod Feb 3 21:50:17.649: INFO: Waiting for pod client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772 to disappear Feb 3 21:50:17.658: INFO: Pod client-containers-36d1bfe1-0723-470f-a1eb-60d54044c772 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:50:17.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3699" for this suite. • [SLOW TEST:8.377 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1839,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:50:17.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:50:17.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a" in namespace "projected-725" to be "success or failure" Feb 3 21:50:17.788: INFO: Pod "downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.348477ms Feb 3 21:50:19.798: INFO: Pod "downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036962317s Feb 3 21:50:21.811: INFO: Pod "downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050302547s Feb 3 21:50:23.827: INFO: Pod "downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066309033s Feb 3 21:50:25.843: INFO: Pod "downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.082103049s STEP: Saw pod success Feb 3 21:50:25.843: INFO: Pod "downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a" satisfied condition "success or failure" Feb 3 21:50:25.857: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a container client-container: STEP: delete the pod Feb 3 21:50:25.970: INFO: Waiting for pod downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a to disappear Feb 3 21:50:26.031: INFO: Pod downwardapi-volume-c5642b7f-9f83-464d-b5d3-25fd3515a79a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:50:26.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-725" for this suite. • [SLOW TEST:8.367 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1853,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:50:26.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a Feb 3 21:50:26.132: INFO: Pod name my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a: Found 0 pods out of 1 Feb 3 21:50:31.139: INFO: Pod name my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a: Found 1 pods out of 1 Feb 3 21:50:31.139: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a" are running Feb 3 21:50:33.150: INFO: Pod "my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a-jsldg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 21:50:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 21:50:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 21:50:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 21:50:26 +0000 
UTC Reason: Message:}]) Feb 3 21:50:33.150: INFO: Trying to dial the pod Feb 3 21:50:38.180: INFO: Controller my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a: Got expected result from replica 1 [my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a-jsldg]: "my-hostname-basic-816b6b15-12a3-4b09-8ebc-8b96f8bf759a-jsldg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:50:38.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6379" for this suite. • [SLOW TEST:12.162 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":117,"skipped":1867,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:50:38.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:50:39.096: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:50:41.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:50:43.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:50:45.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363439, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:50:48.175: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:50:48.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2401-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:50:49.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7027" for this suite. STEP: Destroying namespace "webhook-7027-markers" for this suite. 
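------------------------------
The webhook spec above registers a mutating webhook for a multi-version CRD, then flips the storage version from v1 to v2 and patches the CR again, checking that the mutation applies either way. A hedged sketch of the registration object; the group, resource plural, service name, and namespace echo the run, while the webhook path, rule details, and metadata name are illustrative:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource          # illustrative name
webhooks:
- name: e2e-test-webhook-2401-crds.webhook.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1", "v2"]           # match both versions, so a storage-version flip cannot bypass it
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-2401-crds"]
  clientConfig:
    service:
      namespace: webhook-7027           # the run's webhook namespace
      name: e2e-test-webhook            # the service the run waits on above
      path: /mutating-custom-resource   # illustrative path
    caBundle: "<base64-encoded CA>"     # elided
  admissionReviewVersions: ["v1"]
  sideEffects: None
------------------------------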
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.668 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":118,"skipped":1877,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:50:49.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:50:51.192: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:50:53.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:50:55.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:50:57.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:51:00.636: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:51:00.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5769-crds.webhook.example.com via the AdmissionRegistration API Feb 3 21:51:01.249: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:51:02.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-717" for this suite. STEP: Destroying namespace "webhook-717-markers" for this suite. 
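------------------------------
"With pruning" above means the CRD uses a structural schema without x-kubernetes-preserve-unknown-fields, so any field the webhook adds that the schema does not declare is silently dropped before the object persists. A minimal sketch of such a schema; the kind casing and the mutated field name are assumptions, not read from the run:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-webhook-5769-crds.webhook.example.com
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: e2e-test-webhook-5769-crds
    singular: e2e-test-webhook-5769-crd
    kind: E2eTestWebhook5769Crd         # illustrative casing
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          data:
            type: object
            properties:
              mutation-stage-1:         # hypothetical field a mutating webhook might set
                type: string
# Without x-kubernetes-preserve-unknown-fields, anything outside these
# declared properties is pruned, so the test can only observe mutations
# that land on schema-declared fields.
------------------------------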
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.448 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":119,"skipped":1879,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:51:02.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:51:03.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:51:05.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:51:07.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:51:09.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:51:11.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363463, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:51:14.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:51:14.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2741" for this suite. STEP: Destroying namespace "webhook-2741-markers" for this suite. 
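------------------------------
Mechanically, the pod-mutating webhook above answers each AdmissionReview with a base64-encoded JSONPatch; the apiserver applies the patch and then re-runs defaulting, which is the "apply defaults after mutation" half of the assertion. A hedged sketch of such a response, shown as YAML for readability (the wire format is JSON, and the patch contents are illustrative, not what this run's webhook actually returns):

apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<copied verbatim from request.uid>"
  allowed: true
  patchType: JSONPatch
  # base64 of an illustrative patch such as:
  # [{"op":"add","path":"/spec/initContainers","value":[{"name":"added-by-webhook","image":"busybox"}]}]
  patch: "<base64-encoded JSONPatch>"

Because the patched pod goes back through defaulting, fields the patch leaves unset (imagePullPolicy, terminationMessagePath, and so on) still come out populated, which is the behavior the spec checks.
------------------------------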
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.185 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":120,"skipped":1883,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:51:14.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:51:14.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381" in namespace "projected-8689" to be "success or failure" Feb 3 21:51:14.626: INFO: Pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381": Phase="Pending", Reason="", readiness=false. Elapsed: 17.085335ms Feb 3 21:51:16.637: INFO: Pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028052214s Feb 3 21:51:18.644: INFO: Pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035284496s Feb 3 21:51:20.655: INFO: Pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046704854s Feb 3 21:51:22.674: INFO: Pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065193807s Feb 3 21:51:24.679: INFO: Pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.070474625s STEP: Saw pod success Feb 3 21:51:24.679: INFO: Pod "downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381" satisfied condition "success or failure" Feb 3 21:51:24.685: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381 container client-container: STEP: delete the pod Feb 3 21:51:24.754: INFO: Waiting for pod downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381 to disappear Feb 3 21:51:24.766: INFO: Pod downwardapi-volume-54b83e0d-96f9-41e6-b033-f7f21dfd8381 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:51:24.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8689" for this suite. • [SLOW TEST:10.279 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:51:24.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-562283fd-f8c1-450c-9a1e-7e163698f84c STEP: Creating a pod to test consume configMaps Feb 3 21:51:24.906: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0" in namespace "configmap-5493" to be "success or failure" Feb 3 21:51:24.981: INFO: Pod "pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0": Phase="Pending", Reason="", readiness=false. Elapsed: 74.337658ms Feb 3 21:51:26.990: INFO: Pod "pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083684148s Feb 3 21:51:29.105: INFO: Pod "pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198910199s Feb 3 21:51:31.113: INFO: Pod "pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206755698s Feb 3 21:51:33.137: INFO: Pod "pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.230742518s STEP: Saw pod success Feb 3 21:51:33.137: INFO: Pod "pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0" satisfied condition "success or failure" Feb 3 21:51:33.152: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0 container configmap-volume-test: STEP: delete the pod Feb 3 21:51:33.193: INFO: Waiting for pod pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0 to disappear Feb 3 21:51:33.196: INFO: Pod pod-configmaps-dbb83dff-5584-4a60-b594-7143d99043d0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:51:33.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5493" for this suite. • [SLOW TEST:8.415 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1911,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:51:33.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:51:33.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8345" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":123,"skipped":1931,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:51:33.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:51:49.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3716" for this suite. • [SLOW TEST:16.642 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":124,"skipped":1939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:51:49.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:51:50.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b" in namespace "downward-api-417" to be "success or failure" Feb 3 21:51:50.300: INFO: Pod "downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.962919ms Feb 3 21:51:52.309: INFO: Pod "downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049223799s Feb 3 21:51:54.317: INFO: Pod "downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057216807s Feb 3 21:51:56.349: INFO: Pod "downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.088808644s Feb 3 21:51:58.359: INFO: Pod "downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099081445s STEP: Saw pod success Feb 3 21:51:58.360: INFO: Pod "downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b" satisfied condition "success or failure" Feb 3 21:51:58.368: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b container client-container: STEP: delete the pod Feb 3 21:51:58.415: INFO: Waiting for pod downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b to disappear Feb 3 21:51:58.420: INFO: Pod downwardapi-volume-315b1c5e-038b-4234-8851-a0f9af11fc7b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:51:58.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-417" for this suite. • [SLOW TEST:8.470 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1965,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:51:58.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7307.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7307.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.test-service-2.dns-7307.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7307.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 120.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.120_udp@PTR;check="$$(dig +tcp +noall +answer +search 120.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.120_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7307.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7307.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7307.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7307.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7307.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7307.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 120.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.120_udp@PTR;check="$$(dig +tcp +noall +answer +search 120.243.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.243.120_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:52:08.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.655: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.661: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.666: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.697: INFO: Unable to read jessie_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.701: INFO: Unable to read jessie_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.706: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.711: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:08.740: INFO: Lookups using dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09 failed for: [wheezy_udp@dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_udp@dns-test-service.dns-7307.svc.cluster.local jessie_tcp@dns-test-service.dns-7307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local] Feb 3 21:52:13.758: INFO: Unable to read wheezy_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.765: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods 
dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.772: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.784: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.840: INFO: Unable to read jessie_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.846: INFO: Unable to read jessie_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.852: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.856: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:13.898: INFO: Lookups using dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09 failed for: [wheezy_udp@dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_udp@dns-test-service.dns-7307.svc.cluster.local jessie_tcp@dns-test-service.dns-7307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local] Feb 3 21:52:18.746: INFO: Unable to read wheezy_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.748: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.751: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.754: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.871: INFO: Unable to read jessie_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could 
not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.876: INFO: Unable to read jessie_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.880: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.883: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:18.908: INFO: Lookups using dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09 failed for: [wheezy_udp@dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_udp@dns-test-service.dns-7307.svc.cluster.local jessie_tcp@dns-test-service.dns-7307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local] Feb 3 21:52:23.753: INFO: Unable to read wheezy_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.765: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.811: INFO: Unable to read jessie_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.814: INFO: Unable to read jessie_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.816: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.819: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod 
dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:23.836: INFO: Lookups using dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09 failed for: [wheezy_udp@dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_udp@dns-test-service.dns-7307.svc.cluster.local jessie_tcp@dns-test-service.dns-7307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local] Feb 3 21:52:28.748: INFO: Unable to read wheezy_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.764: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.769: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.774: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.913: INFO: Unable to read jessie_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.917: INFO: Unable to read jessie_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.926: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:28.986: INFO: Lookups using dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09 failed for: [wheezy_udp@dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_udp@dns-test-service.dns-7307.svc.cluster.local jessie_tcp@dns-test-service.dns-7307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local] Feb 3 21:52:33.748: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.753: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.757: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.762: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.817: INFO: Unable to read jessie_udp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.829: INFO: Unable to read jessie_tcp@dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.837: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.844: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local from pod dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09: the server could not find the requested resource (get pods dns-test-214505b9-716f-436f-8b39-a1dd3231ba09) Feb 3 21:52:33.886: INFO: Lookups using dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09 failed for: [wheezy_udp@dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@dns-test-service.dns-7307.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_udp@dns-test-service.dns-7307.svc.cluster.local jessie_tcp@dns-test-service.dns-7307.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7307.svc.cluster.local] Feb 3 21:52:38.854: INFO: DNS probes using dns-7307/dns-test-214505b9-716f-436f-8b39-a1dd3231ba09 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:52:39.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7307" for this suite. 
• [SLOW TEST:40.617 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":126,"skipped":1987,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:39.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  3 21:52:39.266: INFO: PodSpec: initContainers in spec.initContainers
Feb  3 21:53:35.429: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e8d56c5f-13ec-4dde-842f-404d25d904d7", GenerateName:"", Namespace:"init-container-6777", SelfLink:"/api/v1/namespaces/init-container-6777/pods/pod-init-e8d56c5f-13ec-4dde-842f-404d25d904d7", UID:"ea1809ed-3855-4c2c-9b68-4796664aee55", ResourceVersion:"6206576", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716363559, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"266264032"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4wggn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002650000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil),
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4wggn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4wggn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4wggn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00203c088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002abc240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00203c160)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00203c180)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00203c188), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00203c18c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363560, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363560, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363560, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363559, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0024480a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00294a070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00294a0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://259b9071a59b8b3b0b0eb7ab77937e42e67e41afa9f53b3e35fa83575bb8a4de", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002448120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024480e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00203c26f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:35.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6777" for this suite.

• [SLOW TEST:56.393 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":127,"skipped":2005,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:35.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:42.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1658" for this suite.

• [SLOW TEST:7.370 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":128,"skipped":2033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
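The quota lifecycle this spec exercises (create a ResourceQuota, then wait for the controller to fill in its status) can be reproduced outside the suite with client-go. A minimal sketch, not the suite's own code, assuming a v1.17-era client-go (pre-context method signatures, matching the cluster version in this run) and a pre-existing namespace named "quota-demo" (a placeholder, not from this log):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Spec.Hard declares the ceilings; the quota controller fills in Status.Hard
	// and Status.Used shortly after creation, which is what the spec waits for.
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("5"),
				corev1.ResourceCPU:  resource.MustParse("1"),
			},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas("quota-demo").Create(rq); err != nil {
		panic(err)
	}

	// Poll until the status is calculated, the "Ensuring" step above in miniature.
	for i := 0; i < 30; i++ {
		got, err := cs.CoreV1().ResourceQuotas("quota-demo").Get("test-quota", metav1.GetOptions{})
		if err == nil && len(got.Status.Hard) > 0 {
			fmt.Println("status calculated; used:", got.Status.Used)
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("quota status was never calculated")
}

The terminating-scope variant seen in spec 124 only differs by setting Spec.Scopes to []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeTerminating} (or ...NotTerminating) so the quota counts only pods with (or without) an activeDeadlineSeconds.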
[Conformance]","total":278,"completed":128,"skipped":2033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:53:42.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:53:43.586: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:53:45.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:53:47.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:53:49.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, 
Feb  3 21:53:49.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363623, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:53:52.716: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:52.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7059" for this suite.
STEP: Destroying namespace "webhook-7059-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.065 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":129,"skipped":2064,"failed":0}
SSSSS
------------------------------
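The discovery checks in the [It] steps above amount to walking the aggregated discovery endpoints (/apis and its group/version children). A rough client-go equivalent of that walk, a sketch rather than the suite's implementation, again assuming a v1.17-era clientset and the default kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	disc := kubernetes.NewForConfigOrDie(cfg).Discovery()

	// /apis: confirm the admissionregistration.k8s.io group and its v1 version exist.
	groups, err := disc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			fmt.Println("group present; preferred version:", g.PreferredVersion.GroupVersion)
		}
	}

	// /apis/admissionregistration.k8s.io/v1: list its resources; a conforming server
	// includes mutatingwebhookconfigurations and validatingwebhookconfigurations.
	rl, err := disc.ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		fmt.Println("resource:", r.Name)
	}
}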
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:52.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:53:53.002: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 50.452718ms)
Feb  3 21:53:53.007: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.616697ms)
Feb  3 21:53:53.010: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.70502ms)
Feb  3 21:53:53.013: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.768514ms)
Feb  3 21:53:53.016: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.8975ms)
Feb  3 21:53:53.019: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.477263ms)
Feb  3 21:53:53.022: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.030962ms)
Feb  3 21:53:53.025: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.731113ms)
Feb  3 21:53:53.027: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.572134ms)
Feb  3 21:53:53.030: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.400246ms)
Feb  3 21:53:53.032: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.582126ms)
Feb  3 21:53:53.035: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.867733ms)
Feb  3 21:53:53.039: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.872211ms)
Feb  3 21:53:53.042: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.992882ms)
Feb  3 21:53:53.046: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.747397ms)
Feb  3 21:53:53.049: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.701843ms)
Feb  3 21:53:53.052: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.16997ms)
Feb  3 21:53:53.055: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.821134ms)
Feb  3 21:53:53.058: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.752076ms)
Feb  3 21:53:53.063: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.972212ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:53.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1632" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":130,"skipped":2069,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:53.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:53:53.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb  3 21:53:57.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 create -f -'
Feb  3 21:53:59.952: INFO: stderr: ""
Feb  3 21:53:59.953: INFO: stdout: "e2e-test-crd-publish-openapi-8095-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb  3 21:53:59.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 delete e2e-test-crd-publish-openapi-8095-crds test-cr'
Feb  3 21:54:00.101: INFO: stderr: ""
Feb  3 21:54:00.101: INFO: stdout: "e2e-test-crd-publish-openapi-8095-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb  3 21:54:00.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 apply -f -'
Feb  3 21:54:00.539: INFO: stderr: ""
Feb  3 21:54:00.539: INFO: stdout: "e2e-test-crd-publish-openapi-8095-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb  3 21:54:00.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 delete e2e-test-crd-publish-openapi-8095-crds test-cr'
Feb  3 21:54:00.827: INFO: stderr: ""
Feb  3 21:54:00.827: INFO: stdout: "e2e-test-crd-publish-openapi-8095-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb  3 21:54:00.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8095-crds'
Feb  3 21:54:01.284: INFO: stderr: ""
Feb  3 21:54:01.284: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8095-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:05.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3167" for this suite.

• [SLOW TEST:11.976 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":131,"skipped":2076,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:05.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-ec834da3-3721-42c9-a650-699f9f5a1b93
STEP: Creating a pod to test consume secrets
Feb  3 21:54:05.247: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb" in namespace "projected-3549" to be "success or failure"
Feb  3 21:54:05.256: INFO: Pod "pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.832158ms
Feb  3 21:54:07.264: INFO: Pod "pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017051412s
Feb  3 21:54:09.272: INFO: Pod "pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024669174s
Feb  3 21:54:11.278: INFO: Pod "pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031074694s
Feb  3 21:54:13.284: INFO: Pod "pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037321572s
STEP: Saw pod success
Feb  3 21:54:13.284: INFO: Pod "pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb" satisfied condition "success or failure"
Feb  3 21:54:13.289: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 21:54:13.406: INFO: Waiting for pod pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb to disappear
Feb  3 21:54:13.459: INFO: Pod pod-projected-secrets-01e072ef-65f4-45b0-835a-e0b537514eeb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:13.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3549" for this suite.

• [SLOW TEST:8.416 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2087,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:13.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  3 21:54:13.674: INFO: Waiting up to 5m0s for pod "pod-67f89389-770a-4286-b80f-b77b149945ea" in namespace "emptydir-7964" to be "success or failure"
Feb  3 21:54:13.700: INFO: Pod "pod-67f89389-770a-4286-b80f-b77b149945ea": Phase="Pending", Reason="", readiness=false. Elapsed: 26.437815ms
Feb  3 21:54:15.708: INFO: Pod "pod-67f89389-770a-4286-b80f-b77b149945ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034273932s
Feb  3 21:54:17.716: INFO: Pod "pod-67f89389-770a-4286-b80f-b77b149945ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042388472s
Feb  3 21:54:19.735: INFO: Pod "pod-67f89389-770a-4286-b80f-b77b149945ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061062557s
Feb  3 21:54:21.741: INFO: Pod "pod-67f89389-770a-4286-b80f-b77b149945ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067416513s
Feb  3 21:54:23.749: INFO: Pod "pod-67f89389-770a-4286-b80f-b77b149945ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07521548s
STEP: Saw pod success
Feb  3 21:54:23.749: INFO: Pod "pod-67f89389-770a-4286-b80f-b77b149945ea" satisfied condition "success or failure"
Feb  3 21:54:23.753: INFO: Trying to get logs from node jerma-node pod pod-67f89389-770a-4286-b80f-b77b149945ea container test-container: 
STEP: delete the pod
Feb  3 21:54:23.991: INFO: Waiting for pod pod-67f89389-770a-4286-b80f-b77b149945ea to disappear
Feb  3 21:54:24.015: INFO: Pod pod-67f89389-770a-4286-b80f-b77b149945ea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:24.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7964" for this suite.

• [SLOW TEST:10.560 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2121,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:24.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:30.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9998" for this suite.

• [SLOW TEST:6.350 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":134,"skipped":2124,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:30.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:54:30.544: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:31.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8140" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":135,"skipped":2130,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:31.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  3 21:54:39.168: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:39.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1659" for this suite.

• [SLOW TEST:7.489 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2151,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:39.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Feb  3 21:54:39.418: INFO: Waiting up to 5m0s for pod "var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc" in namespace "var-expansion-1013" to be "success or failure"
Feb  3 21:54:39.425: INFO: Pod "var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.679118ms
Feb  3 21:54:41.464: INFO: Pod "var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046072526s
Feb  3 21:54:43.469: INFO: Pod "var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051631414s
Feb  3 21:54:45.481: INFO: Pod "var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063582718s
Feb  3 21:54:47.488: INFO: Pod "var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070113889s
STEP: Saw pod success
Feb  3 21:54:47.488: INFO: Pod "var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc" satisfied condition "success or failure"
Feb  3 21:54:47.492: INFO: Trying to get logs from node jerma-node pod var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc container dapi-container: 
STEP: delete the pod
Feb  3 21:54:47.528: INFO: Waiting for pod var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc to disappear
Feb  3 21:54:47.533: INFO: Pod var-expansion-b135866c-1efa-4847-8ecd-fa1f4439e5fc no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:47.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1013" for this suite.

• [SLOW TEST:8.257 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2174,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:47.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  3 21:54:56.370: INFO: Successfully updated pod "annotationupdateed026e90-fae3-417c-a3da-91f1d75eabbb"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:54:58.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6298" for this suite.

• [SLOW TEST:11.026 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2189,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:54:58.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb  3 21:54:58.709: INFO: >>> kubeConfig: /root/.kube/config
Feb  3 21:55:01.155: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:55:16.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8088" for this suite.

• [SLOW TEST:17.841 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":139,"skipped":2191,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:55:16.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  3 21:55:32.631: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 21:55:32.682: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 21:55:34.682: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 21:55:34.689: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 21:55:36.682: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 21:55:36.689: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 21:55:38.683: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 21:55:38.691: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 21:55:40.682: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 21:55:40.687: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 21:55:42.682: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 21:55:42.686: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:55:42.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5887" for this suite.

• [SLOW TEST:26.278 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2198,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:55:42.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:55:43.516: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:55:45.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:55:47.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:55:49.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363743, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:55:52.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb  3 21:55:52.667: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:55:52.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3660" for this suite.
STEP: Destroying namespace "webhook-3660-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.187 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":141,"skipped":2202,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:55:52.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:56:03.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3278" for this suite.

• [SLOW TEST:10.223 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2209,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:56:03.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:56:03.199: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.95655ms)
Feb  3 21:56:03.242: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 42.004836ms)
Feb  3 21:56:03.248: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.597885ms)
Feb  3 21:56:03.254: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.201848ms)
Feb  3 21:56:03.262: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.862678ms)
Feb  3 21:56:03.268: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.784705ms)
Feb  3 21:56:03.277: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.089653ms)
Feb  3 21:56:03.283: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.844732ms)
Feb  3 21:56:03.290: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.445151ms)
Feb  3 21:56:03.296: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.024018ms)
Feb  3 21:56:03.301: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.772029ms)
Feb  3 21:56:03.310: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.181568ms)
Feb  3 21:56:03.316: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.729297ms)
Feb  3 21:56:03.322: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.024066ms)
Feb  3 21:56:03.328: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.126863ms)
Feb  3 21:56:03.334: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.812077ms)
Feb  3 21:56:03.340: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.046529ms)
Feb  3 21:56:03.345: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.743599ms)
Feb  3 21:56:03.349: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.27261ms)
Feb  3 21:56:03.354: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.409227ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:56:03.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7376" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":143,"skipped":2220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:56:03.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-429e22d3-48a5-4eb2-8d07-73be393e83df
STEP: Creating a pod to test consume configMaps
Feb  3 21:56:03.548: INFO: Waiting up to 5m0s for pod "pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b" in namespace "configmap-5196" to be "success or failure"
Feb  3 21:56:03.573: INFO: Pod "pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.903572ms
Feb  3 21:56:05.583: INFO: Pod "pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034899238s
Feb  3 21:56:07.592: INFO: Pod "pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04411267s
Feb  3 21:56:09.599: INFO: Pod "pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051618267s
Feb  3 21:56:11.613: INFO: Pod "pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064916373s
STEP: Saw pod success
Feb  3 21:56:11.613: INFO: Pod "pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b" satisfied condition "success or failure"
Feb  3 21:56:11.625: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b container configmap-volume-test: 
STEP: delete the pod
Feb  3 21:56:11.701: INFO: Waiting for pod pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b to disappear
Feb  3 21:56:11.754: INFO: Pod pod-configmaps-d972c683-d881-4f2e-b467-b00033c9723b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:56:11.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5196" for this suite.

• [SLOW TEST:8.408 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2243,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:56:11.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5067.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5067.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 21:56:22.023: INFO: DNS probes using dns-5067/dns-test-08bb49a5-0adc-488d-aaa6-a79c787faf89 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:56:22.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5067" for this suite.

• [SLOW TEST:10.338 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":145,"skipped":2256,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:56:22.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  3 21:56:22.329: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Feb  3 21:56:23.178: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  3 21:56:26.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:56:28.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:56:30.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:56:32.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:56:34.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716363783, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:56:37.307: INFO: Waited 936.538641ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:56:37.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5219" for this suite.

• [SLOW TEST:15.800 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":146,"skipped":2318,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:56:37.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  3 21:56:49.183: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:56:49.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3708" for this suite.

• [SLOW TEST:11.313 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":147,"skipped":2334,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:56:49.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb  3 21:56:49.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb  3 21:57:04.481: INFO: >>> kubeConfig: /root/.kube/config
Feb  3 21:57:07.697: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:57:22.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4704" for this suite.

• [SLOW TEST:33.204 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":148,"skipped":2343,"failed":0}
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:57:22.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-4485
STEP: creating replication controller nodeport-test in namespace services-4485
I0203 21:57:22.690970       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-4485, replica count: 2
I0203 21:57:25.742195       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:57:28.743089       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:57:31.743713       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:57:34.744072       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:57:37.744454       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 21:57:37.744: INFO: Creating new exec pod
Feb  3 21:57:44.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4485 execpodmtg7p -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb  3 21:57:45.202: INFO: stderr: "I0203 21:57:45.031123    1396 log.go:172] (0xc000107340) (0xc00067de00) Create stream\nI0203 21:57:45.031284    1396 log.go:172] (0xc000107340) (0xc00067de00) Stream added, broadcasting: 1\nI0203 21:57:45.033773    1396 log.go:172] (0xc000107340) Reply frame received for 1\nI0203 21:57:45.033836    1396 log.go:172] (0xc000107340) (0xc00067dea0) Create stream\nI0203 21:57:45.033849    1396 log.go:172] (0xc000107340) (0xc00067dea0) Stream added, broadcasting: 3\nI0203 21:57:45.034945    1396 log.go:172] (0xc000107340) Reply frame received for 3\nI0203 21:57:45.034981    1396 log.go:172] (0xc000107340) (0xc000a963c0) Create stream\nI0203 21:57:45.035003    1396 log.go:172] (0xc000107340) (0xc000a963c0) Stream added, broadcasting: 5\nI0203 21:57:45.036321    1396 log.go:172] (0xc000107340) Reply frame received for 5\nI0203 21:57:45.106874    1396 log.go:172] (0xc000107340) Data frame received for 5\nI0203 21:57:45.106956    1396 log.go:172] (0xc000a963c0) (5) Data frame handling\nI0203 21:57:45.106985    1396 log.go:172] (0xc000a963c0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0203 21:57:45.114520    1396 log.go:172] (0xc000107340) Data frame received for 5\nI0203 21:57:45.114564    1396 log.go:172] (0xc000a963c0) (5) Data frame handling\nI0203 21:57:45.114582    1396 log.go:172] (0xc000a963c0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0203 21:57:45.188340    1396 log.go:172] (0xc000107340) (0xc00067dea0) Stream removed, broadcasting: 3\nI0203 21:57:45.188519    1396 log.go:172] (0xc000107340) Data frame received for 1\nI0203 21:57:45.188531    1396 log.go:172] (0xc00067de00) (1) Data frame handling\nI0203 21:57:45.188549    1396 log.go:172] (0xc00067de00) (1) Data frame sent\nI0203 21:57:45.188809    1396 log.go:172] (0xc000107340) (0xc00067de00) Stream removed, broadcasting: 1\nI0203 21:57:45.189151    1396 log.go:172] (0xc000107340) (0xc000a963c0) Stream removed, broadcasting: 5\nI0203 21:57:45.190470    1396 log.go:172] (0xc000107340) Go away received\nI0203 21:57:45.190592    1396 log.go:172] (0xc000107340) (0xc00067de00) Stream removed, broadcasting: 1\nI0203 21:57:45.190657    1396 log.go:172] (0xc000107340) (0xc00067dea0) Stream removed, broadcasting: 3\nI0203 21:57:45.190678    1396 log.go:172] (0xc000107340) (0xc000a963c0) Stream removed, broadcasting: 5\n"
Feb  3 21:57:45.202: INFO: stdout: ""
Feb  3 21:57:45.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4485 execpodmtg7p -- /bin/sh -x -c nc -zv -t -w 2 10.96.137.225 80'
Feb  3 21:57:45.480: INFO: stderr: "I0203 21:57:45.327818    1414 log.go:172] (0xc0009b4e70) (0xc000a32280) Create stream\nI0203 21:57:45.327982    1414 log.go:172] (0xc0009b4e70) (0xc000a32280) Stream added, broadcasting: 1\nI0203 21:57:45.330108    1414 log.go:172] (0xc0009b4e70) Reply frame received for 1\nI0203 21:57:45.330135    1414 log.go:172] (0xc0009b4e70) (0xc000a820a0) Create stream\nI0203 21:57:45.330141    1414 log.go:172] (0xc0009b4e70) (0xc000a820a0) Stream added, broadcasting: 3\nI0203 21:57:45.330855    1414 log.go:172] (0xc0009b4e70) Reply frame received for 3\nI0203 21:57:45.330877    1414 log.go:172] (0xc0009b4e70) (0xc000a32320) Create stream\nI0203 21:57:45.330881    1414 log.go:172] (0xc0009b4e70) (0xc000a32320) Stream added, broadcasting: 5\nI0203 21:57:45.331646    1414 log.go:172] (0xc0009b4e70) Reply frame received for 5\nI0203 21:57:45.388731    1414 log.go:172] (0xc0009b4e70) Data frame received for 5\nI0203 21:57:45.388797    1414 log.go:172] (0xc000a32320) (5) Data frame handling\nI0203 21:57:45.388815    1414 log.go:172] (0xc000a32320) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.137.225 80\nI0203 21:57:45.390784    1414 log.go:172] (0xc0009b4e70) Data frame received for 5\nI0203 21:57:45.390795    1414 log.go:172] (0xc000a32320) (5) Data frame handling\nI0203 21:57:45.390807    1414 log.go:172] (0xc000a32320) (5) Data frame sent\nConnection to 10.96.137.225 80 port [tcp/http] succeeded!\nI0203 21:57:45.470587    1414 log.go:172] (0xc0009b4e70) Data frame received for 1\nI0203 21:57:45.470708    1414 log.go:172] (0xc0009b4e70) (0xc000a820a0) Stream removed, broadcasting: 3\nI0203 21:57:45.470761    1414 log.go:172] (0xc000a32280) (1) Data frame handling\nI0203 21:57:45.470793    1414 log.go:172] (0xc000a32280) (1) Data frame sent\nI0203 21:57:45.470853    1414 log.go:172] (0xc0009b4e70) (0xc000a32320) Stream removed, broadcasting: 5\nI0203 21:57:45.470911    1414 log.go:172] (0xc0009b4e70) (0xc000a32280) Stream removed, broadcasting: 1\nI0203 21:57:45.470937    1414 log.go:172] (0xc0009b4e70) Go away received\nI0203 21:57:45.471973    1414 log.go:172] (0xc0009b4e70) (0xc000a32280) Stream removed, broadcasting: 1\nI0203 21:57:45.471994    1414 log.go:172] (0xc0009b4e70) (0xc000a820a0) Stream removed, broadcasting: 3\nI0203 21:57:45.472003    1414 log.go:172] (0xc0009b4e70) (0xc000a32320) Stream removed, broadcasting: 5\n"
Feb  3 21:57:45.480: INFO: stdout: ""
Feb  3 21:57:45.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4485 execpodmtg7p -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30048'
Feb  3 21:57:45.832: INFO: stderr: "I0203 21:57:45.625487    1433 log.go:172] (0xc0000f4580) (0xc000baa000) Create stream\nI0203 21:57:45.625647    1433 log.go:172] (0xc0000f4580) (0xc000baa000) Stream added, broadcasting: 1\nI0203 21:57:45.629601    1433 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0203 21:57:45.629640    1433 log.go:172] (0xc0000f4580) (0xc000936000) Create stream\nI0203 21:57:45.629654    1433 log.go:172] (0xc0000f4580) (0xc000936000) Stream added, broadcasting: 3\nI0203 21:57:45.630951    1433 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0203 21:57:45.630970    1433 log.go:172] (0xc0000f4580) (0xc0005cfd60) Create stream\nI0203 21:57:45.630978    1433 log.go:172] (0xc0000f4580) (0xc0005cfd60) Stream added, broadcasting: 5\nI0203 21:57:45.631940    1433 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0203 21:57:45.721366    1433 log.go:172] (0xc0000f4580) Data frame received for 5\nI0203 21:57:45.721406    1433 log.go:172] (0xc0005cfd60) (5) Data frame handling\nI0203 21:57:45.721436    1433 log.go:172] (0xc0005cfd60) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30048\nI0203 21:57:45.723472    1433 log.go:172] (0xc0000f4580) Data frame received for 5\nI0203 21:57:45.723493    1433 log.go:172] (0xc0005cfd60) (5) Data frame handling\nI0203 21:57:45.723506    1433 log.go:172] (0xc0005cfd60) (5) Data frame sent\nConnection to 10.96.2.250 30048 port [tcp/30048] succeeded!\nI0203 21:57:45.817925    1433 log.go:172] (0xc0000f4580) (0xc000936000) Stream removed, broadcasting: 3\nI0203 21:57:45.818042    1433 log.go:172] (0xc0000f4580) Data frame received for 1\nI0203 21:57:45.818085    1433 log.go:172] (0xc000baa000) (1) Data frame handling\nI0203 21:57:45.818123    1433 log.go:172] (0xc000baa000) (1) Data frame sent\nI0203 21:57:45.818139    1433 log.go:172] (0xc0000f4580) (0xc0005cfd60) Stream removed, broadcasting: 5\nI0203 21:57:45.818251    1433 log.go:172] (0xc0000f4580) (0xc000baa000) Stream removed, broadcasting: 1\nI0203 21:57:45.818268    1433 log.go:172] (0xc0000f4580) Go away received\nI0203 21:57:45.819698    1433 log.go:172] (0xc0000f4580) (0xc000baa000) Stream removed, broadcasting: 1\nI0203 21:57:45.819713    1433 log.go:172] (0xc0000f4580) (0xc000936000) Stream removed, broadcasting: 3\nI0203 21:57:45.819720    1433 log.go:172] (0xc0000f4580) (0xc0005cfd60) Stream removed, broadcasting: 5\n"
Feb  3 21:57:45.832: INFO: stdout: ""
Feb  3 21:57:45.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4485 execpodmtg7p -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30048'
Feb  3 21:57:46.227: INFO: stderr: "I0203 21:57:46.034946    1453 log.go:172] (0xc000990b00) (0xc000a1a3c0) Create stream\nI0203 21:57:46.035137    1453 log.go:172] (0xc000990b00) (0xc000a1a3c0) Stream added, broadcasting: 1\nI0203 21:57:46.040709    1453 log.go:172] (0xc000990b00) Reply frame received for 1\nI0203 21:57:46.040750    1453 log.go:172] (0xc000990b00) (0xc000868000) Create stream\nI0203 21:57:46.040756    1453 log.go:172] (0xc000990b00) (0xc000868000) Stream added, broadcasting: 3\nI0203 21:57:46.041740    1453 log.go:172] (0xc000990b00) Reply frame received for 3\nI0203 21:57:46.041758    1453 log.go:172] (0xc000990b00) (0xc000a1a460) Create stream\nI0203 21:57:46.041764    1453 log.go:172] (0xc000990b00) (0xc000a1a460) Stream added, broadcasting: 5\nI0203 21:57:46.043153    1453 log.go:172] (0xc000990b00) Reply frame received for 5\nI0203 21:57:46.139090    1453 log.go:172] (0xc000990b00) Data frame received for 5\nI0203 21:57:46.139342    1453 log.go:172] (0xc000a1a460) (5) Data frame handling\nI0203 21:57:46.139369    1453 log.go:172] (0xc000a1a460) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30048\nI0203 21:57:46.151100    1453 log.go:172] (0xc000990b00) Data frame received for 5\nI0203 21:57:46.151158    1453 log.go:172] (0xc000a1a460) (5) Data frame handling\nI0203 21:57:46.151177    1453 log.go:172] (0xc000a1a460) (5) Data frame sent\nConnection to 10.96.1.234 30048 port [tcp/30048] succeeded!\nI0203 21:57:46.216701    1453 log.go:172] (0xc000990b00) (0xc000868000) Stream removed, broadcasting: 3\nI0203 21:57:46.216818    1453 log.go:172] (0xc000990b00) Data frame received for 1\nI0203 21:57:46.216844    1453 log.go:172] (0xc000a1a3c0) (1) Data frame handling\nI0203 21:57:46.216861    1453 log.go:172] (0xc000a1a3c0) (1) Data frame sent\nI0203 21:57:46.216869    1453 log.go:172] (0xc000990b00) (0xc000a1a3c0) Stream removed, broadcasting: 1\nI0203 21:57:46.216919    1453 log.go:172] (0xc000990b00) (0xc000a1a460) Stream removed, broadcasting: 5\nI0203 21:57:46.216997    1453 log.go:172] (0xc000990b00) Go away received\nI0203 21:57:46.217651    1453 log.go:172] (0xc000990b00) (0xc000a1a3c0) Stream removed, broadcasting: 1\nI0203 21:57:46.217671    1453 log.go:172] (0xc000990b00) (0xc000868000) Stream removed, broadcasting: 3\nI0203 21:57:46.217677    1453 log.go:172] (0xc000990b00) (0xc000a1a460) Stream removed, broadcasting: 5\n"
Feb  3 21:57:46.227: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:57:46.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4485" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.798 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":149,"skipped":2343,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:57:46.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6357
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 21:57:46.350: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 21:58:22.654: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.3&port=8081&tries=1'] Namespace:pod-network-test-6357 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:58:22.655: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:58:22.704613       8 log.go:172] (0xc0024a78c0) (0xc00225ca00) Create stream
I0203 21:58:22.704800       8 log.go:172] (0xc0024a78c0) (0xc00225ca00) Stream added, broadcasting: 1
I0203 21:58:22.708182       8 log.go:172] (0xc0024a78c0) Reply frame received for 1
I0203 21:58:22.708233       8 log.go:172] (0xc0024a78c0) (0xc0022de000) Create stream
I0203 21:58:22.708247       8 log.go:172] (0xc0024a78c0) (0xc0022de000) Stream added, broadcasting: 3
I0203 21:58:22.709745       8 log.go:172] (0xc0024a78c0) Reply frame received for 3
I0203 21:58:22.709827       8 log.go:172] (0xc0024a78c0) (0xc0022de140) Create stream
I0203 21:58:22.709835       8 log.go:172] (0xc0024a78c0) (0xc0022de140) Stream added, broadcasting: 5
I0203 21:58:22.711352       8 log.go:172] (0xc0024a78c0) Reply frame received for 5
I0203 21:58:22.791250       8 log.go:172] (0xc0024a78c0) Data frame received for 3
I0203 21:58:22.791320       8 log.go:172] (0xc0022de000) (3) Data frame handling
I0203 21:58:22.791338       8 log.go:172] (0xc0022de000) (3) Data frame sent
I0203 21:58:22.888079       8 log.go:172] (0xc0024a78c0) Data frame received for 1
I0203 21:58:22.888399       8 log.go:172] (0xc00225ca00) (1) Data frame handling
I0203 21:58:22.888666       8 log.go:172] (0xc00225ca00) (1) Data frame sent
I0203 21:58:22.889094       8 log.go:172] (0xc0024a78c0) (0xc0022de140) Stream removed, broadcasting: 5
I0203 21:58:22.889225       8 log.go:172] (0xc0024a78c0) (0xc0022de000) Stream removed, broadcasting: 3
I0203 21:58:22.889391       8 log.go:172] (0xc0024a78c0) (0xc00225ca00) Stream removed, broadcasting: 1
I0203 21:58:22.889589       8 log.go:172] (0xc0024a78c0) Go away received
I0203 21:58:22.890346       8 log.go:172] (0xc0024a78c0) (0xc00225ca00) Stream removed, broadcasting: 1
I0203 21:58:22.890402       8 log.go:172] (0xc0024a78c0) (0xc0022de000) Stream removed, broadcasting: 3
I0203 21:58:22.890438       8 log.go:172] (0xc0024a78c0) (0xc0022de140) Stream removed, broadcasting: 5
Feb  3 21:58:22.890: INFO: Waiting for responses: map[]
Feb  3 21:58:22.900: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.5&port=8081&tries=1'] Namespace:pod-network-test-6357 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:58:22.900: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:58:22.952228       8 log.go:172] (0xc001b6e630) (0xc0022ded20) Create stream
I0203 21:58:22.952386       8 log.go:172] (0xc001b6e630) (0xc0022ded20) Stream added, broadcasting: 1
I0203 21:58:22.957309       8 log.go:172] (0xc001b6e630) Reply frame received for 1
I0203 21:58:22.957345       8 log.go:172] (0xc001b6e630) (0xc001cd15e0) Create stream
I0203 21:58:22.957356       8 log.go:172] (0xc001b6e630) (0xc001cd15e0) Stream added, broadcasting: 3
I0203 21:58:22.958727       8 log.go:172] (0xc001b6e630) Reply frame received for 3
I0203 21:58:22.958760       8 log.go:172] (0xc001b6e630) (0xc001681360) Create stream
I0203 21:58:22.958769       8 log.go:172] (0xc001b6e630) (0xc001681360) Stream added, broadcasting: 5
I0203 21:58:22.960124       8 log.go:172] (0xc001b6e630) Reply frame received for 5
I0203 21:58:23.067039       8 log.go:172] (0xc001b6e630) Data frame received for 3
I0203 21:58:23.067123       8 log.go:172] (0xc001cd15e0) (3) Data frame handling
I0203 21:58:23.067150       8 log.go:172] (0xc001cd15e0) (3) Data frame sent
I0203 21:58:23.144964       8 log.go:172] (0xc001b6e630) (0xc001cd15e0) Stream removed, broadcasting: 3
I0203 21:58:23.145292       8 log.go:172] (0xc001b6e630) Data frame received for 1
I0203 21:58:23.145540       8 log.go:172] (0xc0022ded20) (1) Data frame handling
I0203 21:58:23.145719       8 log.go:172] (0xc0022ded20) (1) Data frame sent
I0203 21:58:23.145817       8 log.go:172] (0xc001b6e630) (0xc0022ded20) Stream removed, broadcasting: 1
I0203 21:58:23.145888       8 log.go:172] (0xc001b6e630) (0xc001681360) Stream removed, broadcasting: 5
I0203 21:58:23.146229       8 log.go:172] (0xc001b6e630) (0xc0022ded20) Stream removed, broadcasting: 1
I0203 21:58:23.146246       8 log.go:172] (0xc001b6e630) (0xc001cd15e0) Stream removed, broadcasting: 3
I0203 21:58:23.146262       8 log.go:172] (0xc001b6e630) (0xc001681360) Stream removed, broadcasting: 5
Feb  3 21:58:23.146: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0203 21:58:23.146779       8 log.go:172] (0xc001b6e630) Go away received
Feb  3 21:58:23.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6357" for this suite.

• [SLOW TEST:36.919 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2348,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:23.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:58:23.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb  3 21:58:27.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9578 create -f -'
Feb  3 21:58:31.900: INFO: stderr: ""
Feb  3 21:58:31.900: INFO: stdout: "e2e-test-crd-publish-openapi-5479-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb  3 21:58:31.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9578 delete e2e-test-crd-publish-openapi-5479-crds test-cr'
Feb  3 21:58:32.642: INFO: stderr: ""
Feb  3 21:58:32.642: INFO: stdout: "e2e-test-crd-publish-openapi-5479-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb  3 21:58:32.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9578 apply -f -'
Feb  3 21:58:33.529: INFO: stderr: ""
Feb  3 21:58:33.529: INFO: stdout: "e2e-test-crd-publish-openapi-5479-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb  3 21:58:33.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9578 delete e2e-test-crd-publish-openapi-5479-crds test-cr'
Feb  3 21:58:33.831: INFO: stderr: ""
Feb  3 21:58:33.831: INFO: stdout: "e2e-test-crd-publish-openapi-5479-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb  3 21:58:33.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5479-crds'
Feb  3 21:58:34.237: INFO: stderr: ""
Feb  3 21:58:34.237: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5479-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:39.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9578" for this suite.

• [SLOW TEST:16.387 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":151,"skipped":2366,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:39.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Feb  3 21:58:40.168: INFO: created pod pod-service-account-defaultsa
Feb  3 21:58:40.168: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  3 21:58:40.274: INFO: created pod pod-service-account-mountsa
Feb  3 21:58:40.274: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  3 21:58:40.294: INFO: created pod pod-service-account-nomountsa
Feb  3 21:58:40.294: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  3 21:58:40.306: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  3 21:58:40.306: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  3 21:58:40.335: INFO: created pod pod-service-account-mountsa-mountspec
Feb  3 21:58:40.335: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  3 21:58:40.364: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  3 21:58:40.364: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  3 21:58:40.426: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  3 21:58:40.426: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  3 21:58:40.465: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  3 21:58:40.465: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  3 21:58:40.494: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  3 21:58:40.495: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:40.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1042" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":152,"skipped":2412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:40.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Feb  3 21:58:43.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  3 21:58:44.292: INFO: stderr: ""
Feb  3 21:58:44.292: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:44.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7796" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":153,"skipped":2472,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:44.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:58:45.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545" in namespace "downward-api-565" to be "success or failure"
Feb  3 21:58:45.676: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 238.86514ms
Feb  3 21:58:47.692: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254886507s
Feb  3 21:58:49.959: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521629075s
Feb  3 21:58:53.414: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 7.976655764s
Feb  3 21:58:55.439: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 10.001226374s
Feb  3 21:58:57.463: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02552616s
Feb  3 21:58:59.472: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035011697s
Feb  3 21:59:01.480: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 16.042552292s
Feb  3 21:59:03.494: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Pending", Reason="", readiness=false. Elapsed: 18.056557926s
Feb  3 21:59:05.499: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.061737074s
STEP: Saw pod success
Feb  3 21:59:05.499: INFO: Pod "downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545" satisfied condition "success or failure"
Feb  3 21:59:05.502: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545 container client-container: 
STEP: delete the pod
Feb  3 21:59:05.623: INFO: Waiting for pod downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545 to disappear
Feb  3 21:59:05.633: INFO: Pod downwardapi-volume-eb9f103b-0f77-4319-b192-fded8198c545 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:59:05.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-565" for this suite.

• [SLOW TEST:20.845 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2475,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:59:05.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Feb  3 21:59:05.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4549 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb  3 21:59:06.010: INFO: stderr: ""
Feb  3 21:59:06.010: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Feb  3 21:59:06.010: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb  3 21:59:06.010: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4549" to be "running and ready, or succeeded"
Feb  3 21:59:06.020: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.266974ms
Feb  3 21:59:08.030: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019792426s
Feb  3 21:59:10.038: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02744814s
Feb  3 21:59:12.043: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.032613174s
Feb  3 21:59:12.043: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb  3 21:59:12.043: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb  3 21:59:12.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4549'
Feb  3 21:59:12.245: INFO: stderr: ""
Feb  3 21:59:12.245: INFO: stdout: "I0203 21:59:10.435656       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/72bk 287\nI0203 21:59:10.635731       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/nm5 566\nI0203 21:59:10.836057       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/vxl 386\nI0203 21:59:11.036112       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/f65z 463\nI0203 21:59:11.235943       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/nmf 514\nI0203 21:59:11.436219       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/xcp 399\nI0203 21:59:11.635943       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/2j5 500\nI0203 21:59:11.836209       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/n962 353\nI0203 21:59:12.035981       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/gpt 316\n"
STEP: limiting log lines
Feb  3 21:59:12.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4549 --tail=1'
Feb  3 21:59:12.784: INFO: stderr: ""
Feb  3 21:59:12.784: INFO: stdout: "I0203 21:59:12.635933       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/6wll 401\n"
Feb  3 21:59:12.784: INFO: got output "I0203 21:59:12.635933       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/6wll 401\n"
STEP: limiting log bytes
Feb  3 21:59:12.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4549 --limit-bytes=1'
Feb  3 21:59:12.883: INFO: stderr: ""
Feb  3 21:59:12.883: INFO: stdout: "I"
Feb  3 21:59:12.883: INFO: got output "I"
STEP: exposing timestamps
Feb  3 21:59:12.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4549 --tail=1 --timestamps'
Feb  3 21:59:13.000: INFO: stderr: ""
Feb  3 21:59:13.000: INFO: stdout: "2020-02-03T21:59:12.83632144Z I0203 21:59:12.835961       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/txhb 200\n"
Feb  3 21:59:13.002: INFO: got output "2020-02-03T21:59:12.83632144Z I0203 21:59:12.835961       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/txhb 200\n"
STEP: restricting to a time range
Feb  3 21:59:15.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4549 --since=1s'
Feb  3 21:59:15.729: INFO: stderr: ""
Feb  3 21:59:15.729: INFO: stdout: "I0203 21:59:14.835844       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/bx62 523\nI0203 21:59:15.035979       1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/666s 378\nI0203 21:59:15.235863       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/rjw 549\nI0203 21:59:15.435906       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/scz8 556\nI0203 21:59:15.636215       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/zhh 428\n"
Feb  3 21:59:15.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4549 --since=24h'
Feb  3 21:59:15.890: INFO: stderr: ""
Feb  3 21:59:15.890: INFO: stdout: "I0203 21:59:10.435656       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/72bk 287\nI0203 21:59:10.635731       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/nm5 566\nI0203 21:59:10.836057       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/vxl 386\nI0203 21:59:11.036112       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/f65z 463\nI0203 21:59:11.235943       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/nmf 514\nI0203 21:59:11.436219       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/xcp 399\nI0203 21:59:11.635943       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/2j5 500\nI0203 21:59:11.836209       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/n962 353\nI0203 21:59:12.035981       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/gpt 316\nI0203 21:59:12.236429       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/gwr 512\nI0203 21:59:12.437477       1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/zrn 211\nI0203 21:59:12.635933       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/6wll 401\nI0203 21:59:12.835961       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/txhb 200\nI0203 21:59:13.035980       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/6cc 287\nI0203 21:59:13.235961       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/8sl 342\nI0203 21:59:13.435959       1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/bt7h 200\nI0203 21:59:13.635977       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/rwpj 549\nI0203 21:59:13.836008       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/s8bw 565\nI0203 21:59:14.035981       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/xs4 547\nI0203 21:59:14.236051       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/c4z 492\nI0203 21:59:14.436159       1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/ct8n 334\nI0203 21:59:14.635926       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/9hk 251\nI0203 21:59:14.835844       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/bx62 523\nI0203 21:59:15.035979       1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/666s 378\nI0203 21:59:15.235863       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/rjw 549\nI0203 21:59:15.435906       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/scz8 556\nI0203 21:59:15.636215       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/zhh 428\nI0203 21:59:15.836135       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/xbbj 584\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Feb  3 21:59:15.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4549'
Feb  3 21:59:20.941: INFO: stderr: ""
Feb  3 21:59:20.941: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:59:20.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4549" for this suite.

• [SLOW TEST:15.312 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":155,"skipped":2477,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:59:20.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-ba31e306-8031-4efb-8179-d3f31d6e6291
STEP: Creating a pod to test consume secrets
Feb  3 21:59:21.247: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7" in namespace "projected-5371" to be "success or failure"
Feb  3 21:59:21.269: INFO: Pod "pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.198734ms
Feb  3 21:59:23.277: INFO: Pod "pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030244896s
Feb  3 21:59:25.284: INFO: Pod "pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036948954s
Feb  3 21:59:27.290: INFO: Pod "pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042883759s
Feb  3 21:59:29.297: INFO: Pod "pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050464984s
STEP: Saw pod success
Feb  3 21:59:29.298: INFO: Pod "pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7" satisfied condition "success or failure"
Feb  3 21:59:29.301: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7 container secret-volume-test: 
STEP: delete the pod
Feb  3 21:59:29.474: INFO: Waiting for pod pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7 to disappear
Feb  3 21:59:29.528: INFO: Pod pod-projected-secrets-c57c3259-3bef-45b5-820d-abc42fbb8bd7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:59:29.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5371" for this suite.

• [SLOW TEST:8.590 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2498,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:59:29.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:59:59.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3909" for this suite.

• [SLOW TEST:30.157 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":157,"skipped":2548,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:59:59.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 22:00:00.377: INFO: Number of nodes with available pods: 0
Feb  3 22:00:00.378: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:01.392: INFO: Number of nodes with available pods: 0
Feb  3 22:00:01.392: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:02.770: INFO: Number of nodes with available pods: 0
Feb  3 22:00:02.770: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:03.390: INFO: Number of nodes with available pods: 0
Feb  3 22:00:03.390: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:04.388: INFO: Number of nodes with available pods: 0
Feb  3 22:00:04.388: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:06.600: INFO: Number of nodes with available pods: 0
Feb  3 22:00:06.600: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:08.189: INFO: Number of nodes with available pods: 0
Feb  3 22:00:08.189: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:08.510: INFO: Number of nodes with available pods: 1
Feb  3 22:00:08.510: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  3 22:00:09.441: INFO: Number of nodes with available pods: 1
Feb  3 22:00:09.441: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  3 22:00:10.390: INFO: Number of nodes with available pods: 2
Feb  3 22:00:10.390: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  3 22:00:10.437: INFO: Number of nodes with available pods: 1
Feb  3 22:00:10.438: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:11.451: INFO: Number of nodes with available pods: 1
Feb  3 22:00:11.451: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:12.450: INFO: Number of nodes with available pods: 1
Feb  3 22:00:12.450: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:13.447: INFO: Number of nodes with available pods: 1
Feb  3 22:00:13.447: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:14.452: INFO: Number of nodes with available pods: 1
Feb  3 22:00:14.452: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:15.462: INFO: Number of nodes with available pods: 1
Feb  3 22:00:15.462: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:16.457: INFO: Number of nodes with available pods: 1
Feb  3 22:00:16.458: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:17.449: INFO: Number of nodes with available pods: 1
Feb  3 22:00:17.450: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:18.453: INFO: Number of nodes with available pods: 1
Feb  3 22:00:18.453: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:19.452: INFO: Number of nodes with available pods: 1
Feb  3 22:00:19.453: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:20.453: INFO: Number of nodes with available pods: 1
Feb  3 22:00:20.453: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:00:21.482: INFO: Number of nodes with available pods: 2
Feb  3 22:00:21.482: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5378, will wait for the garbage collector to delete the pods
Feb  3 22:00:21.561: INFO: Deleting DaemonSet.extensions daemon-set took: 17.996801ms
Feb  3 22:00:21.961: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.40379ms
Feb  3 22:00:28.895: INFO: Number of nodes with available pods: 0
Feb  3 22:00:28.895: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 22:00:28.900: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5378/daemonsets","resourceVersion":"6208642"},"items":null}

Feb  3 22:00:28.906: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5378/pods","resourceVersion":"6208642"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:00:28.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5378" for this suite.

• [SLOW TEST:29.225 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":158,"skipped":2555,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:00:28.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-8c365d77-dd90-4562-9055-e2db5a6bc155
STEP: Creating a pod to test consume configMaps
Feb  3 22:00:29.065: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d" in namespace "projected-6617" to be "success or failure"
Feb  3 22:00:29.088: INFO: Pod "pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.391642ms
Feb  3 22:00:31.093: INFO: Pod "pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027635616s
Feb  3 22:00:33.100: INFO: Pod "pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035190847s
Feb  3 22:00:35.108: INFO: Pod "pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04224374s
Feb  3 22:00:37.112: INFO: Pod "pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046995046s
STEP: Saw pod success
Feb  3 22:00:37.112: INFO: Pod "pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d" satisfied condition "success or failure"
Feb  3 22:00:37.116: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 22:00:37.214: INFO: Waiting for pod pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d to disappear
Feb  3 22:00:37.218: INFO: Pod pod-projected-configmaps-e500eed3-6892-4819-86c5-3141773e4c0d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:00:37.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6617" for this suite.

• [SLOW TEST:8.293 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2571,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:00:37.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-2300
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2300 to expose endpoints map[]
Feb  3 22:00:37.405: INFO: Get endpoints failed (13.420595ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  3 22:00:38.413: INFO: successfully validated that service multi-endpoint-test in namespace services-2300 exposes endpoints map[] (1.022280895s elapsed)
STEP: Creating pod pod1 in namespace services-2300
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2300 to expose endpoints map[pod1:[100]]
Feb  3 22:00:42.549: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.12118654s elapsed, will retry)
Feb  3 22:00:44.571: INFO: successfully validated that service multi-endpoint-test in namespace services-2300 exposes endpoints map[pod1:[100]] (6.143216841s elapsed)
STEP: Creating pod pod2 in namespace services-2300
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2300 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  3 22:00:49.129: INFO: Unexpected endpoints: found map[ead673ab-695f-4815-ba63-d02e35b90d14:[100]], expected map[pod1:[100] pod2:[101]] (4.549859614s elapsed, will retry)
Feb  3 22:00:51.219: INFO: successfully validated that service multi-endpoint-test in namespace services-2300 exposes endpoints map[pod1:[100] pod2:[101]] (6.640186145s elapsed)
STEP: Deleting pod pod1 in namespace services-2300
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2300 to expose endpoints map[pod2:[101]]
Feb  3 22:00:52.418: INFO: successfully validated that service multi-endpoint-test in namespace services-2300 exposes endpoints map[pod2:[101]] (1.193576703s elapsed)
STEP: Deleting pod pod2 in namespace services-2300
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2300 to expose endpoints map[]
Feb  3 22:00:53.465: INFO: successfully validated that service multi-endpoint-test in namespace services-2300 exposes endpoints map[] (1.039246755s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:00:54.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2300" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.743 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":160,"skipped":2591,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:00:54.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Feb  3 22:00:55.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3159'
Feb  3 22:00:55.686: INFO: stderr: ""
Feb  3 22:00:55.686: INFO: stdout: "pod/pause created\n"
Feb  3 22:00:55.686: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  3 22:00:55.686: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3159" to be "running and ready"
Feb  3 22:00:56.341: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 654.481612ms
Feb  3 22:00:58.348: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662251744s
Feb  3 22:01:00.395: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.709124194s
Feb  3 22:01:02.401: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715246312s
Feb  3 22:01:04.413: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.727232662s
Feb  3 22:01:04.414: INFO: Pod "pause" satisfied condition "running and ready"
Feb  3 22:01:04.414: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  3 22:01:04.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3159'
Feb  3 22:01:04.571: INFO: stderr: ""
Feb  3 22:01:04.571: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  3 22:01:04.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3159'
Feb  3 22:01:04.712: INFO: stderr: ""
Feb  3 22:01:04.713: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  3 22:01:04.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3159'
Feb  3 22:01:04.827: INFO: stderr: ""
Feb  3 22:01:04.827: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  3 22:01:04.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3159'
Feb  3 22:01:04.947: INFO: stderr: ""
Feb  3 22:01:04.948: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Feb  3 22:01:04.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3159'
Feb  3 22:01:05.113: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:01:05.114: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  3 22:01:05.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3159'
Feb  3 22:01:05.355: INFO: stderr: "No resources found in kubectl-3159 namespace.\n"
Feb  3 22:01:05.355: INFO: stdout: ""
Feb  3 22:01:05.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3159 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 22:01:05.540: INFO: stderr: ""
Feb  3 22:01:05.540: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:05.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3159" for this suite.

• [SLOW TEST:10.587 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":161,"skipped":2601,"failed":0}
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:05.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:01:05.737: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3" in namespace "security-context-test-869" to be "success or failure"
Feb  3 22:01:05.769: INFO: Pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 31.966908ms
Feb  3 22:01:07.776: INFO: Pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039130804s
Feb  3 22:01:09.785: INFO: Pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04787992s
Feb  3 22:01:11.801: INFO: Pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064371618s
Feb  3 22:01:13.819: INFO: Pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082580473s
Feb  3 22:01:15.847: INFO: Pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110708521s
Feb  3 22:01:15.848: INFO: Pod "alpine-nnp-false-2de86db7-2b44-4864-af31-d592fa720ef3" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:15.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-869" for this suite.

• [SLOW TEST:10.357 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2601,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:15.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:16.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4775" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":163,"skipped":2670,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:16.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9541, will wait for the garbage collector to delete the pods
Feb  3 22:01:28.290: INFO: Deleting Job.batch foo took: 8.300463ms
Feb  3 22:01:28.691: INFO: Terminating Job.batch foo pods took: 401.112769ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:02:12.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9541" for this suite.

• [SLOW TEST:56.264 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":164,"skipped":2688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:02:12.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  3 22:02:12.687: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  3 22:02:12.732: INFO: Waiting for terminating namespaces to be deleted...
Feb  3 22:02:12.735: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  3 22:02:12.745: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.745: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 22:02:12.745: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  3 22:02:12.745: INFO: 	Container weave ready: true, restart count 1
Feb  3 22:02:12.745: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 22:02:12.745: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  3 22:02:12.808: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.809: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  3 22:02:12.809: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.809: INFO: 	Container etcd ready: true, restart count 1
Feb  3 22:02:12.809: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.809: INFO: 	Container coredns ready: true, restart count 0
Feb  3 22:02:12.809: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.809: INFO: 	Container coredns ready: true, restart count 0
Feb  3 22:02:12.809: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.809: INFO: 	Container kube-controller-manager ready: true, restart count 3
Feb  3 22:02:12.809: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.809: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 22:02:12.809: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  3 22:02:12.809: INFO: 	Container weave ready: true, restart count 0
Feb  3 22:02:12.809: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 22:02:12.809: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  3 22:02:12.809: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8d28db3d-6e15-450c-bd19-6184e90c9c99 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8d28db3d-6e15-450c-bd19-6184e90c9c99 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8d28db3d-6e15-450c-bd19-6184e90c9c99
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:02:29.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1337" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.755 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":165,"skipped":2784,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:02:29.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-054cd4d5-2e17-4b69-86a7-f62fbb9459a4
STEP: Creating a pod to test consume secrets
Feb  3 22:02:29.540: INFO: Waiting up to 5m0s for pod "pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16" in namespace "secrets-6091" to be "success or failure"
Feb  3 22:02:29.547: INFO: Pod "pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.996208ms
Feb  3 22:02:31.554: INFO: Pod "pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013107144s
Feb  3 22:02:33.562: INFO: Pod "pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021187196s
Feb  3 22:02:35.572: INFO: Pod "pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031753087s
Feb  3 22:02:37.581: INFO: Pod "pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040226904s
STEP: Saw pod success
Feb  3 22:02:37.581: INFO: Pod "pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16" satisfied condition "success or failure"
Feb  3 22:02:37.587: INFO: Trying to get logs from node jerma-node pod pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16 container secret-volume-test: 
STEP: delete the pod
Feb  3 22:02:37.853: INFO: Waiting for pod pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16 to disappear
Feb  3 22:02:37.859: INFO: Pod pod-secrets-cc36b3ca-b63d-42cf-8783-c997e46bee16 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:02:37.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6091" for this suite.
STEP: Destroying namespace "secret-namespace-1798" for this suite.

• [SLOW TEST:8.770 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2801,"failed":0}
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:02:37.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:02:38.026: INFO: Creating ReplicaSet my-hostname-basic-c992d16a-fabb-406b-a3fd-aeb7b4193ff5
Feb  3 22:02:38.116: INFO: Pod name my-hostname-basic-c992d16a-fabb-406b-a3fd-aeb7b4193ff5: Found 1 pods out of 1
Feb  3 22:02:38.116: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c992d16a-fabb-406b-a3fd-aeb7b4193ff5" is running
Feb  3 22:02:48.178: INFO: Pod "my-hostname-basic-c992d16a-fabb-406b-a3fd-aeb7b4193ff5-d56jc" is running (conditions: [])
Feb  3 22:02:48.178: INFO: Trying to dial the pod
Feb  3 22:02:53.201: INFO: Controller my-hostname-basic-c992d16a-fabb-406b-a3fd-aeb7b4193ff5: Got expected result from replica 1 [my-hostname-basic-c992d16a-fabb-406b-a3fd-aeb7b4193ff5-d56jc]: "my-hostname-basic-c992d16a-fabb-406b-a3fd-aeb7b4193ff5-d56jc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:02:53.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3921" for this suite.

• [SLOW TEST:15.274 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":167,"skipped":2801,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:02:53.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb  3 22:02:53.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:03:15.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8321" for this suite.

• [SLOW TEST:22.307 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":168,"skipped":2815,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:03:15.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-rwsx
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 22:03:15.603: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rwsx" in namespace "subpath-6238" to be "success or failure"
Feb  3 22:03:15.609: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Pending", Reason="", readiness=false. Elapsed: 5.200934ms
Feb  3 22:03:17.615: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011322242s
Feb  3 22:03:19.620: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016250125s
Feb  3 22:03:21.779: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175323763s
Feb  3 22:03:23.789: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 8.185978533s
Feb  3 22:03:25.806: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 10.202241586s
Feb  3 22:03:27.821: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 12.217291882s
Feb  3 22:03:29.831: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 14.227058801s
Feb  3 22:03:31.840: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 16.236137411s
Feb  3 22:03:33.848: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 18.244568085s
Feb  3 22:03:35.858: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 20.254030776s
Feb  3 22:03:37.870: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 22.266627808s
Feb  3 22:03:39.877: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 24.273579016s
Feb  3 22:03:41.884: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Running", Reason="", readiness=true. Elapsed: 26.280988631s
Feb  3 22:03:43.899: INFO: Pod "pod-subpath-test-downwardapi-rwsx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.295426306s
STEP: Saw pod success
Feb  3 22:03:43.899: INFO: Pod "pod-subpath-test-downwardapi-rwsx" satisfied condition "success or failure"
Feb  3 22:03:43.905: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-rwsx container test-container-subpath-downwardapi-rwsx: 
STEP: delete the pod
Feb  3 22:03:43.995: INFO: Waiting for pod pod-subpath-test-downwardapi-rwsx to disappear
Feb  3 22:03:44.068: INFO: Pod pod-subpath-test-downwardapi-rwsx no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rwsx
Feb  3 22:03:44.068: INFO: Deleting pod "pod-subpath-test-downwardapi-rwsx" in namespace "subpath-6238"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:03:44.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6238" for this suite.

• [SLOW TEST:28.588 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":169,"skipped":2819,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:03:44.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-c659981f-8204-4ba8-b436-f77b25842656 in namespace container-probe-837
Feb  3 22:03:52.551: INFO: Started pod busybox-c659981f-8204-4ba8-b436-f77b25842656 in namespace container-probe-837
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 22:03:52.556: INFO: Initial restart count of pod busybox-c659981f-8204-4ba8-b436-f77b25842656 is 0
Feb  3 22:04:40.810: INFO: Restart count of pod container-probe-837/busybox-c659981f-8204-4ba8-b436-f77b25842656 is now 1 (48.254202662s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:04:40.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-837" for this suite.

• [SLOW TEST:56.835 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2836,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:04:40.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  3 22:04:41.041: INFO: Waiting up to 5m0s for pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab" in namespace "emptydir-314" to be "success or failure"
Feb  3 22:04:41.096: INFO: Pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 55.363452ms
Feb  3 22:04:43.105: INFO: Pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064435571s
Feb  3 22:04:45.117: INFO: Pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075771013s
Feb  3 22:04:47.124: INFO: Pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083552636s
Feb  3 22:04:49.131: INFO: Pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090618404s
Feb  3 22:04:51.138: INFO: Pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097625062s
STEP: Saw pod success
Feb  3 22:04:51.139: INFO: Pod "pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab" satisfied condition "success or failure"
Feb  3 22:04:51.142: INFO: Trying to get logs from node jerma-node pod pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab container test-container: 
STEP: delete the pod
Feb  3 22:04:51.185: INFO: Waiting for pod pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab to disappear
Feb  3 22:04:51.188: INFO: Pod pod-ddfb6253-bdbe-4be3-a4f1-001feed3f5ab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:04:51.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-314" for this suite.

• [SLOW TEST:10.249 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2855,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:04:51.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-36fe5eaf-985a-44d0-ae5c-ec6b35bb130e in namespace container-probe-9357
Feb  3 22:04:59.320: INFO: Started pod liveness-36fe5eaf-985a-44d0-ae5c-ec6b35bb130e in namespace container-probe-9357
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 22:04:59.326: INFO: Initial restart count of pod liveness-36fe5eaf-985a-44d0-ae5c-ec6b35bb130e is 0
Feb  3 22:05:17.432: INFO: Restart count of pod container-probe-9357/liveness-36fe5eaf-985a-44d0-ae5c-ec6b35bb130e is now 1 (18.10630273s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:05:17.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9357" for this suite.

• [SLOW TEST:26.330 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2861,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:05:17.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:05:17.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119" in namespace "projected-5642" to be "success or failure"
Feb  3 22:05:17.655: INFO: Pod "downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119": Phase="Pending", Reason="", readiness=false. Elapsed: 39.240655ms
Feb  3 22:05:19.664: INFO: Pod "downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047691802s
Feb  3 22:05:21.674: INFO: Pod "downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057608558s
Feb  3 22:05:23.680: INFO: Pod "downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06416902s
Feb  3 22:05:25.686: INFO: Pod "downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069909836s
STEP: Saw pod success
Feb  3 22:05:25.686: INFO: Pod "downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119" satisfied condition "success or failure"
Feb  3 22:05:25.689: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119 container client-container: 
STEP: delete the pod
Feb  3 22:05:25.733: INFO: Waiting for pod downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119 to disappear
Feb  3 22:05:25.746: INFO: Pod downwardapi-volume-2dff6190-c928-4f0e-bdea-a6c0ae94a119 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:05:25.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5642" for this suite.

• [SLOW TEST:8.226 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2862,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:05:25.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Feb  3 22:05:25.883: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix630537635/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:05:25.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2624" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":174,"skipped":2864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:05:26.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1807.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1807.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1807.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 22:05:38.315: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.323: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.330: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.336: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.360: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.372: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.377: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:38.386: INFO: Lookups using dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local]

Feb  3 22:05:43.396: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.400: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.403: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.412: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.438: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.441: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.444: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.448: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:43.456: INFO: Lookups using dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local]

Feb  3 22:05:48.396: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.401: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.412: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.438: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.444: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.450: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.455: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:48.467: INFO: Lookups using dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local]

Feb  3 22:05:53.400: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.408: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.417: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.425: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.444: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.450: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.455: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.461: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:53.474: INFO: Lookups using dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local]

Feb  3 22:05:58.396: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.401: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.405: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.410: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.477: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.483: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.488: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.492: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:05:58.500: INFO: Lookups using dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local]

Feb  3 22:06:03.396: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.401: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.406: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.413: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.428: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.435: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.441: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.447: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local from pod dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78: the server could not find the requested resource (get pods dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78)
Feb  3 22:06:03.460: INFO: Lookups using dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1807.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1807.svc.cluster.local jessie_udp@dns-test-service-2.dns-1807.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1807.svc.cluster.local]

Feb  3 22:06:08.539: INFO: DNS probes using dns-1807/dns-test-129ad9fc-b919-4c0d-a9e7-8a66ff99bd78 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:06:08.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1807" for this suite.

• [SLOW TEST:43.060 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":175,"skipped":2904,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:06:09.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1439
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1439
I0203 22:06:09.397134       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1439, replica count: 2
I0203 22:06:12.448652       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:06:15.449280       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:06:18.449942       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:06:21.450720       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 22:06:21.450: INFO: Creating new exec pod
Feb  3 22:06:30.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1439 execpodmv5v2 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb  3 22:06:30.958: INFO: stderr: "I0203 22:06:30.735112    1923 log.go:172] (0xc0001174a0) (0xc0008ae5a0) Create stream\nI0203 22:06:30.735372    1923 log.go:172] (0xc0001174a0) (0xc0008ae5a0) Stream added, broadcasting: 1\nI0203 22:06:30.739473    1923 log.go:172] (0xc0001174a0) Reply frame received for 1\nI0203 22:06:30.739542    1923 log.go:172] (0xc0001174a0) (0xc0009ac8c0) Create stream\nI0203 22:06:30.739555    1923 log.go:172] (0xc0001174a0) (0xc0009ac8c0) Stream added, broadcasting: 3\nI0203 22:06:30.740844    1923 log.go:172] (0xc0001174a0) Reply frame received for 3\nI0203 22:06:30.740881    1923 log.go:172] (0xc0001174a0) (0xc0009ac960) Create stream\nI0203 22:06:30.740887    1923 log.go:172] (0xc0001174a0) (0xc0009ac960) Stream added, broadcasting: 5\nI0203 22:06:30.741882    1923 log.go:172] (0xc0001174a0) Reply frame received for 5\nI0203 22:06:30.829143    1923 log.go:172] (0xc0001174a0) Data frame received for 5\nI0203 22:06:30.829249    1923 log.go:172] (0xc0009ac960) (5) Data frame handling\nI0203 22:06:30.829297    1923 log.go:172] (0xc0009ac960) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0203 22:06:30.833846    1923 log.go:172] (0xc0001174a0) Data frame received for 5\nI0203 22:06:30.833894    1923 log.go:172] (0xc0009ac960) (5) Data frame handling\nI0203 22:06:30.833908    1923 log.go:172] (0xc0009ac960) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0203 22:06:30.937900    1923 log.go:172] (0xc0001174a0) Data frame received for 1\nI0203 22:06:30.938102    1923 log.go:172] (0xc0008ae5a0) (1) Data frame handling\nI0203 22:06:30.938162    1923 log.go:172] (0xc0008ae5a0) (1) Data frame sent\nI0203 22:06:30.938206    1923 log.go:172] (0xc0001174a0) (0xc0008ae5a0) Stream removed, broadcasting: 1\nI0203 22:06:30.938423    1923 log.go:172] (0xc0001174a0) (0xc0009ac8c0) Stream removed, broadcasting: 3\nI0203 22:06:30.938452    1923 log.go:172] (0xc0001174a0) (0xc0009ac960) Stream removed, broadcasting: 5\nI0203 22:06:30.938479    1923 log.go:172] (0xc0001174a0) Go away received\nI0203 22:06:30.939509    1923 log.go:172] (0xc0001174a0) (0xc0008ae5a0) Stream removed, broadcasting: 1\nI0203 22:06:30.939563    1923 log.go:172] (0xc0001174a0) (0xc0009ac8c0) Stream removed, broadcasting: 3\nI0203 22:06:30.939571    1923 log.go:172] (0xc0001174a0) (0xc0009ac960) Stream removed, broadcasting: 5\n"
Feb  3 22:06:30.959: INFO: stdout: ""
Feb  3 22:06:30.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1439 execpodmv5v2 -- /bin/sh -x -c nc -zv -t -w 2 10.96.179.164 80'
Feb  3 22:06:31.255: INFO: stderr: "I0203 22:06:31.094712    1938 log.go:172] (0xc000114580) (0xc00079e140) Create stream\nI0203 22:06:31.094828    1938 log.go:172] (0xc000114580) (0xc00079e140) Stream added, broadcasting: 1\nI0203 22:06:31.097725    1938 log.go:172] (0xc000114580) Reply frame received for 1\nI0203 22:06:31.097758    1938 log.go:172] (0xc000114580) (0xc0007e2000) Create stream\nI0203 22:06:31.097773    1938 log.go:172] (0xc000114580) (0xc0007e2000) Stream added, broadcasting: 3\nI0203 22:06:31.098815    1938 log.go:172] (0xc000114580) Reply frame received for 3\nI0203 22:06:31.098838    1938 log.go:172] (0xc000114580) (0xc00079e1e0) Create stream\nI0203 22:06:31.098846    1938 log.go:172] (0xc000114580) (0xc00079e1e0) Stream added, broadcasting: 5\nI0203 22:06:31.099832    1938 log.go:172] (0xc000114580) Reply frame received for 5\nI0203 22:06:31.161404    1938 log.go:172] (0xc000114580) Data frame received for 5\nI0203 22:06:31.161461    1938 log.go:172] (0xc00079e1e0) (5) Data frame handling\nI0203 22:06:31.161472    1938 log.go:172] (0xc00079e1e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.179.164 80\nI0203 22:06:31.164486    1938 log.go:172] (0xc000114580) Data frame received for 5\nI0203 22:06:31.164513    1938 log.go:172] (0xc00079e1e0) (5) Data frame handling\nI0203 22:06:31.164532    1938 log.go:172] (0xc00079e1e0) (5) Data frame sent\nConnection to 10.96.179.164 80 port [tcp/http] succeeded!\nI0203 22:06:31.242715    1938 log.go:172] (0xc000114580) (0xc0007e2000) Stream removed, broadcasting: 3\nI0203 22:06:31.243031    1938 log.go:172] (0xc000114580) Data frame received for 1\nI0203 22:06:31.243098    1938 log.go:172] (0xc000114580) (0xc00079e1e0) Stream removed, broadcasting: 5\nI0203 22:06:31.243170    1938 log.go:172] (0xc00079e140) (1) Data frame handling\nI0203 22:06:31.243204    1938 log.go:172] (0xc00079e140) (1) Data frame sent\nI0203 22:06:31.243218    1938 log.go:172] (0xc000114580) (0xc00079e140) Stream removed, broadcasting: 1\nI0203 22:06:31.243289    1938 log.go:172] (0xc000114580) Go away received\nI0203 22:06:31.244512    1938 log.go:172] (0xc000114580) (0xc00079e140) Stream removed, broadcasting: 1\nI0203 22:06:31.244558    1938 log.go:172] (0xc000114580) (0xc0007e2000) Stream removed, broadcasting: 3\nI0203 22:06:31.244566    1938 log.go:172] (0xc000114580) (0xc00079e1e0) Stream removed, broadcasting: 5\n"
Feb  3 22:06:31.255: INFO: stdout: ""
Feb  3 22:06:31.255: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:06:31.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1439" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.244 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":176,"skipped":2905,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:06:31.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-057612b4-eac2-4029-a53c-10dfab4dd013
STEP: Creating a pod to test consume secrets
Feb  3 22:06:31.436: INFO: Waiting up to 5m0s for pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568" in namespace "secrets-2981" to be "success or failure"
Feb  3 22:06:31.447: INFO: Pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568": Phase="Pending", Reason="", readiness=false. Elapsed: 10.66325ms
Feb  3 22:06:33.453: INFO: Pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016417247s
Feb  3 22:06:35.466: INFO: Pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029272691s
Feb  3 22:06:37.472: INFO: Pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035714794s
Feb  3 22:06:40.567: INFO: Pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568": Phase="Pending", Reason="", readiness=false. Elapsed: 9.130621789s
Feb  3 22:06:42.579: INFO: Pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.142693659s
STEP: Saw pod success
Feb  3 22:06:42.579: INFO: Pod "pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568" satisfied condition "success or failure"
Feb  3 22:06:42.598: INFO: Trying to get logs from node jerma-node pod pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568 container secret-volume-test: 
STEP: delete the pod
Feb  3 22:06:43.459: INFO: Waiting for pod pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568 to disappear
Feb  3 22:06:43.469: INFO: Pod pod-secrets-192254a2-dcad-4eec-84cb-2d3b09a59568 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:06:43.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2981" for this suite.

• [SLOW TEST:12.166 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2906,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:06:43.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:06:44.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 22:06:46.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:06:48.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:06:50.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:06:52.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364404, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:06:55.380: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:06:55.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7074-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:06:56.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9366" for this suite.
STEP: Destroying namespace "webhook-9366-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.391 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":178,"skipped":2942,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:06:56.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Feb  3 22:07:05.069: INFO: Pod pod-hostip-4ad5a116-48fe-4cb9-a582-1bfe50438844 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:07:05.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9029" for this suite.

• [SLOW TEST:8.204 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2976,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:07:05.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-d1d6772b-49c7-4d05-baa9-e3ca4deab5c5
STEP: Creating a pod to test consume configMaps
Feb  3 22:07:05.209: INFO: Waiting up to 5m0s for pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15" in namespace "configmap-1622" to be "success or failure"
Feb  3 22:07:05.224: INFO: Pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15": Phase="Pending", Reason="", readiness=false. Elapsed: 14.270481ms
Feb  3 22:07:07.235: INFO: Pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026118842s
Feb  3 22:07:09.243: INFO: Pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033388216s
Feb  3 22:07:11.249: INFO: Pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039821373s
Feb  3 22:07:13.260: INFO: Pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050672346s
Feb  3 22:07:15.267: INFO: Pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057879514s
STEP: Saw pod success
Feb  3 22:07:15.267: INFO: Pod "pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15" satisfied condition "success or failure"
Feb  3 22:07:15.273: INFO: Trying to get logs from node jerma-node pod pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15 container configmap-volume-test: 
STEP: delete the pod
Feb  3 22:07:15.312: INFO: Waiting for pod pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15 to disappear
Feb  3 22:07:15.320: INFO: Pod pod-configmaps-682e312b-6136-4e85-bc6d-31fd913d1d15 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:07:15.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1622" for this suite.

• [SLOW TEST:10.256 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2981,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:07:15.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb  3 22:07:15.460: INFO: >>> kubeConfig: /root/.kube/config
Feb  3 22:07:19.466: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:07:33.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5380" for this suite.

• [SLOW TEST:17.870 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":181,"skipped":2992,"failed":0}
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:07:33.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:07:41.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1011" for this suite.

• [SLOW TEST:8.184 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2992,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:07:41.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Feb  3 22:07:41.473: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:07:41.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2480" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":183,"skipped":2997,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:07:41.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb  3 22:07:41.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9604'
Feb  3 22:07:42.244: INFO: stderr: ""
Feb  3 22:07:42.244: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb  3 22:07:43.284: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:43.285: INFO: Found 0 / 1
Feb  3 22:07:44.377: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:44.378: INFO: Found 0 / 1
Feb  3 22:07:45.251: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:45.251: INFO: Found 0 / 1
Feb  3 22:07:46.252: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:46.253: INFO: Found 0 / 1
Feb  3 22:07:47.252: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:47.252: INFO: Found 0 / 1
Feb  3 22:07:48.249: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:48.249: INFO: Found 0 / 1
Feb  3 22:07:49.250: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:49.250: INFO: Found 1 / 1
Feb  3 22:07:49.250: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  3 22:07:49.256: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:49.256: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  3 22:07:49.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-vbdnz --namespace=kubectl-9604 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  3 22:07:49.386: INFO: stderr: ""
Feb  3 22:07:49.386: INFO: stdout: "pod/agnhost-master-vbdnz patched\n"
STEP: checking annotations
Feb  3 22:07:49.391: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:07:49.391: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:07:49.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9604" for this suite.

• [SLOW TEST:7.809 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":184,"skipped":3007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:07:49.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-caa10bfb-f284-4468-af8f-6287afe15b54 in namespace container-probe-2014
Feb  3 22:07:57.553: INFO: Started pod busybox-caa10bfb-f284-4468-af8f-6287afe15b54 in namespace container-probe-2014
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 22:07:57.568: INFO: Initial restart count of pod busybox-caa10bfb-f284-4468-af8f-6287afe15b54 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:11:58.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2014" for this suite.

• [SLOW TEST:249.421 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3045,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:11:58.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:11:58.940: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:05.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7490" for this suite.

• [SLOW TEST:6.848 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":186,"skipped":3046,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:05.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:12:05.814: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772" in namespace "security-context-test-9813" to be "success or failure"
Feb  3 22:12:05.873: INFO: Pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772": Phase="Pending", Reason="", readiness=false. Elapsed: 58.292532ms
Feb  3 22:12:07.882: INFO: Pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068100812s
Feb  3 22:12:09.940: INFO: Pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125606604s
Feb  3 22:12:11.954: INFO: Pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139587637s
Feb  3 22:12:14.105: INFO: Pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.290519489s
Feb  3 22:12:14.105: INFO: Pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772" satisfied condition "success or failure"
Feb  3 22:12:14.518: INFO: Got logs for pod "busybox-privileged-false-00a611a0-3e2b-4f86-9274-e5d7dff3d772": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:14.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9813" for this suite.

• [SLOW TEST:8.866 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3058,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:14.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:12:14.680: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:16.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1517" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":188,"skipped":3067,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:16.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  3 22:12:16.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4357'
Feb  3 22:12:18.497: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 22:12:18.497: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Feb  3 22:12:20.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4357'
Feb  3 22:12:20.690: INFO: stderr: ""
Feb  3 22:12:20.690: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:20.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4357" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":189,"skipped":3119,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:20.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Feb  3 22:12:20.811: INFO: Waiting up to 5m0s for pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066" in namespace "containers-8272" to be "success or failure"
Feb  3 22:12:20.834: INFO: Pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066": Phase="Pending", Reason="", readiness=false. Elapsed: 21.988459ms
Feb  3 22:12:22.840: INFO: Pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02862096s
Feb  3 22:12:24.847: INFO: Pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03501502s
Feb  3 22:12:26.857: INFO: Pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04496068s
Feb  3 22:12:28.916: INFO: Pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104899874s
Feb  3 22:12:30.924: INFO: Pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112162116s
STEP: Saw pod success
Feb  3 22:12:30.924: INFO: Pod "client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066" satisfied condition "success or failure"
Feb  3 22:12:30.927: INFO: Trying to get logs from node jerma-node pod client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066 container test-container: 
STEP: delete the pod
Feb  3 22:12:30.957: INFO: Waiting for pod client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066 to disappear
Feb  3 22:12:30.964: INFO: Pod client-containers-9a5fbbf9-3724-4683-a612-94772a0a0066 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:30.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8272" for this suite.

• [SLOW TEST:10.263 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3125,"failed":0}
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:30.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:12:31.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b" in namespace "downward-api-6358" to be "success or failure"
Feb  3 22:12:31.115: INFO: Pod "downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180684ms
Feb  3 22:12:33.122: INFO: Pod "downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010962089s
Feb  3 22:12:35.129: INFO: Pod "downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017876795s
Feb  3 22:12:37.135: INFO: Pod "downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0238036s
Feb  3 22:12:39.163: INFO: Pod "downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051943627s
STEP: Saw pod success
Feb  3 22:12:39.163: INFO: Pod "downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b" satisfied condition "success or failure"
Feb  3 22:12:39.172: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b container client-container: 
STEP: delete the pod
Feb  3 22:12:39.265: INFO: Waiting for pod downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b to disappear
Feb  3 22:12:39.277: INFO: Pod downwardapi-volume-5b600e0e-d8fd-4f6f-a333-e4abbf3f674b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:39.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6358" for this suite.

• [SLOW TEST:8.318 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3125,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:39.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  3 22:12:39.538: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6210 /api/v1/namespaces/watch-6210/configmaps/e2e-watch-test-resource-version bbf2ae64-f8b4-4611-a498-bf310849d7da 6211384 0 2020-02-03 22:12:39 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 22:12:39.539: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6210 /api/v1/namespaces/watch-6210/configmaps/e2e-watch-test-resource-version bbf2ae64-f8b4-4611-a498-bf310849d7da 6211385 0 2020-02-03 22:12:39 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:39.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6210" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":192,"skipped":3163,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:39.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  3 22:12:49.729: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-160 PodName:pod-sharedvolume-97ec2969-1bd7-46bf-8f64-23e8180b3f24 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:12:49.730: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:12:49.809625       8 log.go:172] (0xc00148ed10) (0xc0029b7ea0) Create stream
I0203 22:12:49.809749       8 log.go:172] (0xc00148ed10) (0xc0029b7ea0) Stream added, broadcasting: 1
I0203 22:12:49.815220       8 log.go:172] (0xc00148ed10) Reply frame received for 1
I0203 22:12:49.815319       8 log.go:172] (0xc00148ed10) (0xc0023fa0a0) Create stream
I0203 22:12:49.815346       8 log.go:172] (0xc00148ed10) (0xc0023fa0a0) Stream added, broadcasting: 3
I0203 22:12:49.818632       8 log.go:172] (0xc00148ed10) Reply frame received for 3
I0203 22:12:49.818695       8 log.go:172] (0xc00148ed10) (0xc0017e6aa0) Create stream
I0203 22:12:49.818721       8 log.go:172] (0xc00148ed10) (0xc0017e6aa0) Stream added, broadcasting: 5
I0203 22:12:49.824147       8 log.go:172] (0xc00148ed10) Reply frame received for 5
I0203 22:12:49.956241       8 log.go:172] (0xc00148ed10) Data frame received for 3
I0203 22:12:49.956435       8 log.go:172] (0xc0023fa0a0) (3) Data frame handling
I0203 22:12:49.956478       8 log.go:172] (0xc0023fa0a0) (3) Data frame sent
I0203 22:12:50.050631       8 log.go:172] (0xc00148ed10) (0xc0017e6aa0) Stream removed, broadcasting: 5
I0203 22:12:50.050872       8 log.go:172] (0xc00148ed10) Data frame received for 1
I0203 22:12:50.050981       8 log.go:172] (0xc0029b7ea0) (1) Data frame handling
I0203 22:12:50.051044       8 log.go:172] (0xc0029b7ea0) (1) Data frame sent
I0203 22:12:50.051133       8 log.go:172] (0xc00148ed10) (0xc0023fa0a0) Stream removed, broadcasting: 3
I0203 22:12:50.051297       8 log.go:172] (0xc00148ed10) (0xc0029b7ea0) Stream removed, broadcasting: 1
I0203 22:12:50.051826       8 log.go:172] (0xc00148ed10) (0xc0029b7ea0) Stream removed, broadcasting: 1
I0203 22:12:50.051848       8 log.go:172] (0xc00148ed10) (0xc0023fa0a0) Stream removed, broadcasting: 3
I0203 22:12:50.051872       8 log.go:172] (0xc00148ed10) (0xc0017e6aa0) Stream removed, broadcasting: 5
Feb  3 22:12:50.051: INFO: Exec stderr: ""
I0203 22:12:50.051972       8 log.go:172] (0xc00148ed10) Go away received
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:12:50.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-160" for this suite.

• [SLOW TEST:10.520 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":193,"skipped":3182,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:12:50.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:12:50.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb  3 22:12:54.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 create -f -'
Feb  3 22:12:57.041: INFO: stderr: ""
Feb  3 22:12:57.041: INFO: stdout: "e2e-test-crd-publish-openapi-9445-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb  3 22:12:57.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 delete e2e-test-crd-publish-openapi-9445-crds test-foo'
Feb  3 22:12:57.168: INFO: stderr: ""
Feb  3 22:12:57.168: INFO: stdout: "e2e-test-crd-publish-openapi-9445-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb  3 22:12:57.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 apply -f -'
Feb  3 22:12:57.586: INFO: stderr: ""
Feb  3 22:12:57.586: INFO: stdout: "e2e-test-crd-publish-openapi-9445-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb  3 22:12:57.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 delete e2e-test-crd-publish-openapi-9445-crds test-foo'
Feb  3 22:12:57.838: INFO: stderr: ""
Feb  3 22:12:57.839: INFO: stdout: "e2e-test-crd-publish-openapi-9445-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb  3 22:12:57.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 create -f -'
Feb  3 22:12:58.362: INFO: rc: 1
Feb  3 22:12:58.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 apply -f -'
Feb  3 22:12:58.926: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb  3 22:12:58.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 create -f -'
Feb  3 22:12:59.379: INFO: rc: 1
Feb  3 22:12:59.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3344 apply -f -'
Feb  3 22:12:59.730: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb  3 22:12:59.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9445-crds'
Feb  3 22:13:00.233: INFO: stderr: ""
Feb  3 22:13:00.233: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9445-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb  3 22:13:00.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9445-crds.metadata'
Feb  3 22:13:00.786: INFO: stderr: ""
Feb  3 22:13:00.786: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9445-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb  3 22:13:00.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9445-crds.spec'
Feb  3 22:13:01.230: INFO: stderr: ""
Feb  3 22:13:01.231: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9445-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb  3 22:13:01.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9445-crds.spec.bars'
Feb  3 22:13:01.735: INFO: stderr: ""
Feb  3 22:13:01.735: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9445-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb  3 22:13:01.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9445-crds.spec.bars2'
Feb  3 22:13:02.259: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:13:06.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3344" for this suite.

• [SLOW TEST:16.015 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":194,"skipped":3222,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:13:06.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:13:12.289: INFO: Waiting up to 5m0s for pod "client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591" in namespace "pods-3095" to be "success or failure"
Feb  3 22:13:12.346: INFO: Pod "client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591": Phase="Pending", Reason="", readiness=false. Elapsed: 56.268044ms
Feb  3 22:13:14.351: INFO: Pod "client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061821568s
Feb  3 22:13:16.359: INFO: Pod "client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069450143s
Feb  3 22:13:18.367: INFO: Pod "client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077064439s
Feb  3 22:13:20.375: INFO: Pod "client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085107845s
STEP: Saw pod success
Feb  3 22:13:20.375: INFO: Pod "client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591" satisfied condition "success or failure"
Feb  3 22:13:20.383: INFO: Trying to get logs from node jerma-node pod client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591 container env3cont: 
STEP: delete the pod
Feb  3 22:13:20.417: INFO: Waiting for pod client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591 to disappear
Feb  3 22:13:20.464: INFO: Pod client-envvars-529b1358-4aa4-49c3-8dd0-3297f1ddf591 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:13:20.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3095" for this suite.

• [SLOW TEST:14.411 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3224,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:13:20.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:13:21.346: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 22:13:23.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:13:25.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:13:27.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:13:29.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364801, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:13:32.466: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:13:32.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4698" for this suite.
STEP: Destroying namespace "webhook-4698-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.308 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":196,"skipped":3228,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:13:32.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:13:33.339: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 22:13:35.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:13:37.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:13:39.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:13:41.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:13:43.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716364813, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:13:46.414: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod; it should be denied by the webhook
Feb  3 22:13:54.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-6556 to-be-attached-pod -i -c=container1'
Feb  3 22:13:54.749: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:13:54.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6556" for this suite.
STEP: Destroying namespace "webhook-6556-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.141 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":197,"skipped":3228,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:13:54.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  3 22:14:05.183: INFO: &Pod{ObjectMeta:{send-events-a7f928fd-1d7f-4aed-8126-a62e447b1ee7  events-3530 /api/v1/namespaces/events-3530/pods/send-events-a7f928fd-1d7f-4aed-8126-a62e447b1ee7 a7a81e75-3e87-44d4-8411-923f955fc496 6211840 0 2020-02-03 22:13:55 +0000 UTC   map[name:foo time:57563776] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6c87d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6c87d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6c87d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:14:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:14:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:13:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-03 22:13:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:14:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://39c53854b34a5541d1b6b22ba391eecff1db9e6dbf1ca67af83b15dbc2a9ee94,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb  3 22:14:07.191: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  3 22:14:09.222: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:14:09.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3530" for this suite.

• [SLOW TEST:14.358 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":198,"skipped":3237,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:14:09.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-392930ee-d55e-4895-98d3-51adfbffaae6
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-392930ee-d55e-4895-98d3-51adfbffaae6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:15:22.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1404" for this suite.

• [SLOW TEST:73.532 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3256,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:15:22.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:15:34.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-841" for this suite.

• [SLOW TEST:11.933 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":200,"skipped":3287,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:15:34.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  3 22:15:34.842: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  3 22:15:34.885: INFO: Waiting for terminating namespaces to be deleted...
Feb  3 22:15:34.893: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  3 22:15:34.903: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.903: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 22:15:34.903: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  3 22:15:34.903: INFO: 	Container weave ready: true, restart count 1
Feb  3 22:15:34.903: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 22:15:34.903: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  3 22:15:34.958: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  3 22:15:34.959: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container etcd ready: true, restart count 1
Feb  3 22:15:34.959: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container coredns ready: true, restart count 0
Feb  3 22:15:34.959: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container coredns ready: true, restart count 0
Feb  3 22:15:34.959: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container kube-controller-manager ready: true, restart count 3
Feb  3 22:15:34.959: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 22:15:34.959: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container weave ready: true, restart count 0
Feb  3 22:15:34.959: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 22:15:34.959: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb  3 22:15:34.959: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ec6c5159-fd33-46f4-9932-82b222b70ada 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-ec6c5159-fd33-46f4-9932-82b222b70ada off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ec6c5159-fd33-46f4-9932-82b222b70ada
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:20:51.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2073" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:316.559 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":201,"skipped":3290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:20:51.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:20:51.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9570" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":202,"skipped":3371,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:20:51.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2391
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-2391
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-2391
Feb  3 22:20:52.331: INFO: Found 0 stateful pods, waiting for 1
Feb  3 22:21:02.336: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale-up will not halt with an unhealthy stateful pod
Feb  3 22:21:02.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:21:02.724: INFO: stderr: "I0203 22:21:02.505217    2354 log.go:172] (0xc0008b8000) (0xc0006cde00) Create stream\nI0203 22:21:02.506344    2354 log.go:172] (0xc0008b8000) (0xc0006cde00) Stream added, broadcasting: 1\nI0203 22:21:02.519042    2354 log.go:172] (0xc0008b8000) Reply frame received for 1\nI0203 22:21:02.519190    2354 log.go:172] (0xc0008b8000) (0xc0005ea640) Create stream\nI0203 22:21:02.519213    2354 log.go:172] (0xc0008b8000) (0xc0005ea640) Stream added, broadcasting: 3\nI0203 22:21:02.520863    2354 log.go:172] (0xc0008b8000) Reply frame received for 3\nI0203 22:21:02.520924    2354 log.go:172] (0xc0008b8000) (0xc0003a5400) Create stream\nI0203 22:21:02.520937    2354 log.go:172] (0xc0008b8000) (0xc0003a5400) Stream added, broadcasting: 5\nI0203 22:21:02.522021    2354 log.go:172] (0xc0008b8000) Reply frame received for 5\nI0203 22:21:02.605517    2354 log.go:172] (0xc0008b8000) Data frame received for 5\nI0203 22:21:02.605574    2354 log.go:172] (0xc0003a5400) (5) Data frame handling\nI0203 22:21:02.605600    2354 log.go:172] (0xc0003a5400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:21:02.653890    2354 log.go:172] (0xc0008b8000) Data frame received for 3\nI0203 22:21:02.654039    2354 log.go:172] (0xc0005ea640) (3) Data frame handling\nI0203 22:21:02.654067    2354 log.go:172] (0xc0005ea640) (3) Data frame sent\nI0203 22:21:02.711619    2354 log.go:172] (0xc0008b8000) (0xc0005ea640) Stream removed, broadcasting: 3\nI0203 22:21:02.712005    2354 log.go:172] (0xc0008b8000) Data frame received for 1\nI0203 22:21:02.712033    2354 log.go:172] (0xc0008b8000) (0xc0003a5400) Stream removed, broadcasting: 5\nI0203 22:21:02.712088    2354 log.go:172] (0xc0006cde00) (1) Data frame handling\nI0203 22:21:02.712142    2354 log.go:172] (0xc0006cde00) (1) Data frame sent\nI0203 22:21:02.712158    2354 log.go:172] (0xc0008b8000) (0xc0006cde00) Stream removed, broadcasting: 1\nI0203 22:21:02.712173    2354 log.go:172] (0xc0008b8000) Go away received\nI0203 22:21:02.712861    2354 log.go:172] (0xc0008b8000) (0xc0006cde00) Stream removed, broadcasting: 1\nI0203 22:21:02.712924    2354 log.go:172] (0xc0008b8000) (0xc0005ea640) Stream removed, broadcasting: 3\nI0203 22:21:02.712938    2354 log.go:172] (0xc0008b8000) (0xc0003a5400) Stream removed, broadcasting: 5\n"
Feb  3 22:21:02.724: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:21:02.724: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:21:02.729: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  3 22:21:12.759: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
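The mv above removed the file served by the webserver container's readiness probe, so the kubelet marked ss-0 unready without restarting it. The probe spec itself is not shown in this log; the transition can be watched from outside the suite with something like the following (hand-run commands, not part of the recorded run):

  # Watch ss-0's Ready condition flip to False once the readiness probe starts failing
  kubectl --kubeconfig=/root/.kube/config -n statefulset-2391 \
    get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'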
Feb  3 22:21:12.759: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:21:12.816: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  3 22:21:12.816: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  }]
Feb  3 22:21:12.816: INFO: ss-1              Pending         []
Feb  3 22:21:12.816: INFO: 
Feb  3 22:21:12.816: INFO: StatefulSet ss has not reached scale 3, at 2
Feb  3 22:21:14.320: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.955526753s
Feb  3 22:21:15.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.451325749s
Feb  3 22:21:16.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.349644208s
Feb  3 22:21:18.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.331611887s
Feb  3 22:21:20.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.04073288s
Feb  3 22:21:21.292: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.485245091s
Feb  3 22:21:22.298: INFO: Verifying statefulset ss doesn't scale past 3 for another 479.650469ms
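Note that ss-1 was already created while ss-0 was still unready: burst scaling exercises podManagementPolicy: Parallel, under which the controller launches scale-up pods without waiting for lower-ordinal pods to become Ready (the default OrderedReady policy would have blocked on ss-0). The scale-up the test drives through the API is equivalent to this hand-run command (assumed, not from the log):

  kubectl --kubeconfig=/root/.kube/config -n statefulset-2391 \
    scale statefulset ss --replicas=3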
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-2391
Feb  3 22:21:23.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:21:23.689: INFO: stderr: "I0203 22:21:23.500937    2369 log.go:172] (0xc000a24dc0) (0xc0008a3f40) Create stream\nI0203 22:21:23.501063    2369 log.go:172] (0xc000a24dc0) (0xc0008a3f40) Stream added, broadcasting: 1\nI0203 22:21:23.503574    2369 log.go:172] (0xc000a24dc0) Reply frame received for 1\nI0203 22:21:23.503605    2369 log.go:172] (0xc000a24dc0) (0xc0009ad720) Create stream\nI0203 22:21:23.503610    2369 log.go:172] (0xc000a24dc0) (0xc0009ad720) Stream added, broadcasting: 3\nI0203 22:21:23.504848    2369 log.go:172] (0xc000a24dc0) Reply frame received for 3\nI0203 22:21:23.504876    2369 log.go:172] (0xc000a24dc0) (0xc0009aa0a0) Create stream\nI0203 22:21:23.504893    2369 log.go:172] (0xc000a24dc0) (0xc0009aa0a0) Stream added, broadcasting: 5\nI0203 22:21:23.507263    2369 log.go:172] (0xc000a24dc0) Reply frame received for 5\nI0203 22:21:23.590767    2369 log.go:172] (0xc000a24dc0) Data frame received for 3\nI0203 22:21:23.590870    2369 log.go:172] (0xc0009ad720) (3) Data frame handling\nI0203 22:21:23.590916    2369 log.go:172] (0xc0009ad720) (3) Data frame sent\nI0203 22:21:23.590981    2369 log.go:172] (0xc000a24dc0) Data frame received for 5\nI0203 22:21:23.590992    2369 log.go:172] (0xc0009aa0a0) (5) Data frame handling\nI0203 22:21:23.591014    2369 log.go:172] (0xc0009aa0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:21:23.677224    2369 log.go:172] (0xc000a24dc0) Data frame received for 1\nI0203 22:21:23.677351    2369 log.go:172] (0xc0008a3f40) (1) Data frame handling\nI0203 22:21:23.677371    2369 log.go:172] (0xc0008a3f40) (1) Data frame sent\nI0203 22:21:23.677878    2369 log.go:172] (0xc000a24dc0) (0xc0008a3f40) Stream removed, broadcasting: 1\nI0203 22:21:23.677981    2369 log.go:172] (0xc000a24dc0) (0xc0009aa0a0) Stream removed, broadcasting: 5\nI0203 22:21:23.678387    2369 log.go:172] (0xc000a24dc0) (0xc0009ad720) Stream removed, broadcasting: 3\nI0203 22:21:23.678806    2369 log.go:172] (0xc000a24dc0) (0xc0008a3f40) Stream removed, broadcasting: 1\nI0203 22:21:23.678863    2369 log.go:172] (0xc000a24dc0) (0xc0009ad720) Stream removed, broadcasting: 3\nI0203 22:21:23.678878    2369 log.go:172] (0xc000a24dc0) (0xc0009aa0a0) Stream removed, broadcasting: 5\n"
Feb  3 22:21:23.689: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:21:23.689: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:21:23.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:21:24.241: INFO: stderr: "I0203 22:21:23.924817    2384 log.go:172] (0xc000a620b0) (0xc000a100a0) Create stream\nI0203 22:21:23.925123    2384 log.go:172] (0xc000a620b0) (0xc000a100a0) Stream added, broadcasting: 1\nI0203 22:21:23.931452    2384 log.go:172] (0xc000a620b0) Reply frame received for 1\nI0203 22:21:23.931578    2384 log.go:172] (0xc000a620b0) (0xc000ad2140) Create stream\nI0203 22:21:23.931618    2384 log.go:172] (0xc000a620b0) (0xc000ad2140) Stream added, broadcasting: 3\nI0203 22:21:23.934056    2384 log.go:172] (0xc000a620b0) Reply frame received for 3\nI0203 22:21:23.934295    2384 log.go:172] (0xc000a620b0) (0xc000a10140) Create stream\nI0203 22:21:23.934322    2384 log.go:172] (0xc000a620b0) (0xc000a10140) Stream added, broadcasting: 5\nI0203 22:21:23.937395    2384 log.go:172] (0xc000a620b0) Reply frame received for 5\nI0203 22:21:24.095833    2384 log.go:172] (0xc000a620b0) Data frame received for 5\nI0203 22:21:24.096012    2384 log.go:172] (0xc000a10140) (5) Data frame handling\nI0203 22:21:24.096042    2384 log.go:172] (0xc000a10140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0203 22:21:24.096085    2384 log.go:172] (0xc000a620b0) Data frame received for 3\nI0203 22:21:24.096111    2384 log.go:172] (0xc000ad2140) (3) Data frame handling\nI0203 22:21:24.096151    2384 log.go:172] (0xc000ad2140) (3) Data frame sent\nI0203 22:21:24.217608    2384 log.go:172] (0xc000a620b0) (0xc000ad2140) Stream removed, broadcasting: 3\nI0203 22:21:24.218870    2384 log.go:172] (0xc000a620b0) Data frame received for 1\nI0203 22:21:24.218989    2384 log.go:172] (0xc000a100a0) (1) Data frame handling\nI0203 22:21:24.219036    2384 log.go:172] (0xc000a100a0) (1) Data frame sent\nI0203 22:21:24.219070    2384 log.go:172] (0xc000a620b0) (0xc000a10140) Stream removed, broadcasting: 5\nI0203 22:21:24.219209    2384 log.go:172] (0xc000a620b0) (0xc000a100a0) Stream removed, broadcasting: 1\nI0203 22:21:24.219602    2384 log.go:172] (0xc000a620b0) Go away received\nI0203 22:21:24.225871    2384 log.go:172] (0xc000a620b0) (0xc000a100a0) Stream removed, broadcasting: 1\nI0203 22:21:24.225995    2384 log.go:172] (0xc000a620b0) (0xc000ad2140) Stream removed, broadcasting: 3\nI0203 22:21:24.226044    2384 log.go:172] (0xc000a620b0) (0xc000a10140) Stream removed, broadcasting: 5\n"
Feb  3 22:21:24.241: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:21:24.241: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:21:24.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:21:24.667: INFO: stderr: "I0203 22:21:24.419968    2406 log.go:172] (0xc0006d4a50) (0xc0006c8000) Create stream\nI0203 22:21:24.420051    2406 log.go:172] (0xc0006d4a50) (0xc0006c8000) Stream added, broadcasting: 1\nI0203 22:21:24.423506    2406 log.go:172] (0xc0006d4a50) Reply frame received for 1\nI0203 22:21:24.423555    2406 log.go:172] (0xc0006d4a50) (0xc00061a000) Create stream\nI0203 22:21:24.423567    2406 log.go:172] (0xc0006d4a50) (0xc00061a000) Stream added, broadcasting: 3\nI0203 22:21:24.425326    2406 log.go:172] (0xc0006d4a50) Reply frame received for 3\nI0203 22:21:24.425348    2406 log.go:172] (0xc0006d4a50) (0xc00061a140) Create stream\nI0203 22:21:24.425359    2406 log.go:172] (0xc0006d4a50) (0xc00061a140) Stream added, broadcasting: 5\nI0203 22:21:24.426418    2406 log.go:172] (0xc0006d4a50) Reply frame received for 5\nI0203 22:21:24.532355    2406 log.go:172] (0xc0006d4a50) Data frame received for 5\nI0203 22:21:24.533016    2406 log.go:172] (0xc0006d4a50) Data frame received for 3\nI0203 22:21:24.533161    2406 log.go:172] (0xc00061a000) (3) Data frame handling\nI0203 22:21:24.533185    2406 log.go:172] (0xc00061a000) (3) Data frame sent\nI0203 22:21:24.533296    2406 log.go:172] (0xc00061a140) (5) Data frame handling\nI0203 22:21:24.533349    2406 log.go:172] (0xc00061a140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0203 22:21:24.647271    2406 log.go:172] (0xc0006d4a50) (0xc00061a000) Stream removed, broadcasting: 3\nI0203 22:21:24.647478    2406 log.go:172] (0xc0006d4a50) Data frame received for 1\nI0203 22:21:24.647547    2406 log.go:172] (0xc0006c8000) (1) Data frame handling\nI0203 22:21:24.647608    2406 log.go:172] (0xc0006c8000) (1) Data frame sent\nI0203 22:21:24.647802    2406 log.go:172] (0xc0006d4a50) (0xc00061a140) Stream removed, broadcasting: 5\nI0203 22:21:24.647857    2406 log.go:172] (0xc0006d4a50) (0xc0006c8000) Stream removed, broadcasting: 1\nI0203 22:21:24.647867    2406 log.go:172] (0xc0006d4a50) Go away received\nI0203 22:21:24.649849    2406 log.go:172] (0xc0006d4a50) (0xc0006c8000) Stream removed, broadcasting: 1\nI0203 22:21:24.649869    2406 log.go:172] (0xc0006d4a50) (0xc00061a000) Stream removed, broadcasting: 3\nI0203 22:21:24.649880    2406 log.go:172] (0xc0006d4a50) (0xc00061a140) Stream removed, broadcasting: 5\n"
Feb  3 22:21:24.667: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:21:24.667: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:21:24.673: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:21:24.674: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:21:24.674: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale-down will not halt with an unhealthy stateful pod
Feb  3 22:21:24.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:21:25.031: INFO: stderr: "I0203 22:21:24.808783    2427 log.go:172] (0xc000be94a0) (0xc000a10780) Create stream\nI0203 22:21:24.808896    2427 log.go:172] (0xc000be94a0) (0xc000a10780) Stream added, broadcasting: 1\nI0203 22:21:24.813865    2427 log.go:172] (0xc000be94a0) Reply frame received for 1\nI0203 22:21:24.813969    2427 log.go:172] (0xc000be94a0) (0xc0006d3ae0) Create stream\nI0203 22:21:24.813981    2427 log.go:172] (0xc000be94a0) (0xc0006d3ae0) Stream added, broadcasting: 3\nI0203 22:21:24.815210    2427 log.go:172] (0xc000be94a0) Reply frame received for 3\nI0203 22:21:24.815228    2427 log.go:172] (0xc000be94a0) (0xc00060c6e0) Create stream\nI0203 22:21:24.815234    2427 log.go:172] (0xc000be94a0) (0xc00060c6e0) Stream added, broadcasting: 5\nI0203 22:21:24.816190    2427 log.go:172] (0xc000be94a0) Reply frame received for 5\nI0203 22:21:24.872289    2427 log.go:172] (0xc000be94a0) Data frame received for 5\nI0203 22:21:24.872342    2427 log.go:172] (0xc00060c6e0) (5) Data frame handling\nI0203 22:21:24.872361    2427 log.go:172] (0xc00060c6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:21:24.875837    2427 log.go:172] (0xc000be94a0) Data frame received for 3\nI0203 22:21:24.875866    2427 log.go:172] (0xc0006d3ae0) (3) Data frame handling\nI0203 22:21:24.875887    2427 log.go:172] (0xc0006d3ae0) (3) Data frame sent\nI0203 22:21:25.017294    2427 log.go:172] (0xc000be94a0) Data frame received for 1\nI0203 22:21:25.017441    2427 log.go:172] (0xc000be94a0) (0xc0006d3ae0) Stream removed, broadcasting: 3\nI0203 22:21:25.017876    2427 log.go:172] (0xc000a10780) (1) Data frame handling\nI0203 22:21:25.017942    2427 log.go:172] (0xc000a10780) (1) Data frame sent\nI0203 22:21:25.017958    2427 log.go:172] (0xc000be94a0) (0xc00060c6e0) Stream removed, broadcasting: 5\nI0203 22:21:25.018000    2427 log.go:172] (0xc000be94a0) (0xc000a10780) Stream removed, broadcasting: 1\nI0203 22:21:25.018020    2427 log.go:172] (0xc000be94a0) Go away received\nI0203 22:21:25.018888    2427 log.go:172] (0xc000be94a0) (0xc000a10780) Stream removed, broadcasting: 1\nI0203 22:21:25.018898    2427 log.go:172] (0xc000be94a0) (0xc0006d3ae0) Stream removed, broadcasting: 3\nI0203 22:21:25.018902    2427 log.go:172] (0xc000be94a0) (0xc00060c6e0) Stream removed, broadcasting: 5\n"
Feb  3 22:21:25.031: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:21:25.031: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:21:25.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:21:25.449: INFO: stderr: "I0203 22:21:25.221930    2447 log.go:172] (0xc000116370) (0xc000570000) Create stream\nI0203 22:21:25.222194    2447 log.go:172] (0xc000116370) (0xc000570000) Stream added, broadcasting: 1\nI0203 22:21:25.225380    2447 log.go:172] (0xc000116370) Reply frame received for 1\nI0203 22:21:25.225464    2447 log.go:172] (0xc000116370) (0xc00070fd60) Create stream\nI0203 22:21:25.225476    2447 log.go:172] (0xc000116370) (0xc00070fd60) Stream added, broadcasting: 3\nI0203 22:21:25.228588    2447 log.go:172] (0xc000116370) Reply frame received for 3\nI0203 22:21:25.228664    2447 log.go:172] (0xc000116370) (0xc000570140) Create stream\nI0203 22:21:25.228684    2447 log.go:172] (0xc000116370) (0xc000570140) Stream added, broadcasting: 5\nI0203 22:21:25.230404    2447 log.go:172] (0xc000116370) Reply frame received for 5\nI0203 22:21:25.329676    2447 log.go:172] (0xc000116370) Data frame received for 5\nI0203 22:21:25.329722    2447 log.go:172] (0xc000570140) (5) Data frame handling\nI0203 22:21:25.329745    2447 log.go:172] (0xc000570140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:21:25.352696    2447 log.go:172] (0xc000116370) Data frame received for 3\nI0203 22:21:25.352752    2447 log.go:172] (0xc00070fd60) (3) Data frame handling\nI0203 22:21:25.352767    2447 log.go:172] (0xc00070fd60) (3) Data frame sent\nI0203 22:21:25.436370    2447 log.go:172] (0xc000116370) (0xc00070fd60) Stream removed, broadcasting: 3\nI0203 22:21:25.436666    2447 log.go:172] (0xc000116370) Data frame received for 1\nI0203 22:21:25.436718    2447 log.go:172] (0xc000570000) (1) Data frame handling\nI0203 22:21:25.436755    2447 log.go:172] (0xc000570000) (1) Data frame sent\nI0203 22:21:25.436779    2447 log.go:172] (0xc000116370) (0xc000570000) Stream removed, broadcasting: 1\nI0203 22:21:25.436838    2447 log.go:172] (0xc000116370) (0xc000570140) Stream removed, broadcasting: 5\nI0203 22:21:25.436961    2447 log.go:172] (0xc000116370) Go away received\nI0203 22:21:25.437737    2447 log.go:172] (0xc000116370) (0xc000570000) Stream removed, broadcasting: 1\nI0203 22:21:25.437760    2447 log.go:172] (0xc000116370) (0xc00070fd60) Stream removed, broadcasting: 3\nI0203 22:21:25.437786    2447 log.go:172] (0xc000116370) (0xc000570140) Stream removed, broadcasting: 5\n"
Feb  3 22:21:25.450: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:21:25.450: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:21:25.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:21:25.730: INFO: stderr: "I0203 22:21:25.557724    2470 log.go:172] (0xc000903970) (0xc0009d6820) Create stream\nI0203 22:21:25.557837    2470 log.go:172] (0xc000903970) (0xc0009d6820) Stream added, broadcasting: 1\nI0203 22:21:25.562632    2470 log.go:172] (0xc000903970) Reply frame received for 1\nI0203 22:21:25.562667    2470 log.go:172] (0xc000903970) (0xc0006208c0) Create stream\nI0203 22:21:25.562673    2470 log.go:172] (0xc000903970) (0xc0006208c0) Stream added, broadcasting: 3\nI0203 22:21:25.563592    2470 log.go:172] (0xc000903970) Reply frame received for 3\nI0203 22:21:25.563618    2470 log.go:172] (0xc000903970) (0xc000441680) Create stream\nI0203 22:21:25.563628    2470 log.go:172] (0xc000903970) (0xc000441680) Stream added, broadcasting: 5\nI0203 22:21:25.565046    2470 log.go:172] (0xc000903970) Reply frame received for 5\nI0203 22:21:25.640574    2470 log.go:172] (0xc000903970) Data frame received for 5\nI0203 22:21:25.640930    2470 log.go:172] (0xc000441680) (5) Data frame handling\nI0203 22:21:25.641012    2470 log.go:172] (0xc000441680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:21:25.661869    2470 log.go:172] (0xc000903970) Data frame received for 3\nI0203 22:21:25.661885    2470 log.go:172] (0xc0006208c0) (3) Data frame handling\nI0203 22:21:25.661902    2470 log.go:172] (0xc0006208c0) (3) Data frame sent\nI0203 22:21:25.720302    2470 log.go:172] (0xc000903970) Data frame received for 1\nI0203 22:21:25.720453    2470 log.go:172] (0xc0009d6820) (1) Data frame handling\nI0203 22:21:25.720474    2470 log.go:172] (0xc0009d6820) (1) Data frame sent\nI0203 22:21:25.720528    2470 log.go:172] (0xc000903970) (0xc0009d6820) Stream removed, broadcasting: 1\nI0203 22:21:25.720753    2470 log.go:172] (0xc000903970) (0xc0006208c0) Stream removed, broadcasting: 3\nI0203 22:21:25.721001    2470 log.go:172] (0xc000903970) (0xc000441680) Stream removed, broadcasting: 5\nI0203 22:21:25.721039    2470 log.go:172] (0xc000903970) (0xc0009d6820) Stream removed, broadcasting: 1\nI0203 22:21:25.721048    2470 log.go:172] (0xc000903970) (0xc0006208c0) Stream removed, broadcasting: 3\nI0203 22:21:25.721053    2470 log.go:172] (0xc000903970) (0xc000441680) Stream removed, broadcasting: 5\n"
Feb  3 22:21:25.731: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:21:25.731: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:21:25.731: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:21:25.738: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  3 22:21:35.752: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:21:35.752: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:21:35.752: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
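With all three pods deliberately unready, the test now drives the scale-down; under Parallel pod management the controller deletes pods without waiting for ordinal-order readiness, which is what lets the scale-down proceed past the unhealthy pods. The equivalent hand-run call (assumed, not from the log):

  kubectl --kubeconfig=/root/.kube/config -n statefulset-2391 \
    scale statefulset ss --replicas=0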
Feb  3 22:21:35.807: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  3 22:21:35.807: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  }]
Feb  3 22:21:35.808: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:35.808: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:35.808: INFO: 
Feb  3 22:21:35.808: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 22:21:37.442: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  3 22:21:37.442: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  }]
Feb  3 22:21:37.442: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:37.442: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:37.442: INFO: 
Feb  3 22:21:37.442: INFO: StatefulSet ss has not reached scale 0, at 3
[Feb  3 22:21:38 through 22:21:40: three further identical status dumps; ss-0, ss-1 and ss-2 all Running but not Ready (GRACE 30s), StatefulSet ss has not reached scale 0, at 3]
Feb  3 22:21:41.627: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  3 22:21:41.627: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:20:52 +0000 UTC  }]
Feb  3 22:21:41.627: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:41.628: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:41.628: INFO: 
Feb  3 22:21:41.628: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 22:21:42.640: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  3 22:21:42.640: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:42.640: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 22:21:12 +0000 UTC  }]
Feb  3 22:21:42.640: INFO: 
Feb  3 22:21:42.640: INFO: StatefulSet ss has not reached scale 0, at 2
[Feb  3 22:21:43 through 22:21:45: three further identical status dumps; ss-1 and ss-2 still Pending (GRACE 30s), StatefulSet ss has not reached scale 0, at 2]
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-2391
Feb  3 22:21:46.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:21:46.934: INFO: rc: 1
Feb  3 22:21:46.934: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
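The restore command raced with the scale-down: by the time kubectl exec reached ss-1 its webserver container had already been torn down, and on later attempts the pod object itself was gone, hence the shift from "container not found" above to the NotFound errors that follow. The two failure modes can be told apart by hand (assumed check, not from the run):

  # Still returns the pod while only the container is gone; NotFound once the pod is deleted
  kubectl --kubeconfig=/root/.kube/config -n statefulset-2391 get pod ss-1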
Feb  3 22:21:56.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:21:57.105: INFO: rc: 1
Feb  3 22:21:57.105: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
[Feb  3 22:22:07 through 22:26:44: 28 further identical retries of the RunHostCmd above, one every 10s; each returned rc: 1 with empty stdout and stderr: Error from server (NotFound): pods "ss-1" not found]
Feb  3 22:26:54.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2391 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:26:54.831: INFO: rc: 1
Feb  3 22:26:54.832: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Feb  3 22:26:54.832: INFO: Scaling statefulset ss to 0
Feb  3 22:26:54.871: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 22:26:54.877: INFO: Deleting all statefulset in ns statefulset-2391
Feb  3 22:26:54.882: INFO: Scaling statefulset ss to 0
Feb  3 22:26:54.895: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:26:54.915: INFO: Deleting statefulset ss
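The AfterEach hook scales every StatefulSet in the namespace to zero before deleting it, so no pods outlive the suite. Verifying the teardown by hand would look like (assumed commands):

  kubectl --kubeconfig=/root/.kube/config -n statefulset-2391 get statefulset,pods
  # Expected once cleanup finishes: No resources found in statefulset-2391 namespace.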
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:26:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2391" for this suite.

• [SLOW TEST:363.487 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":203,"skipped":3380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:26:55.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  3 22:27:03.716: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1833 pod-service-account-fb2958ae-bbdb-4601-b252-17070362186d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  3 22:27:04.189: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1833 pod-service-account-fb2958ae-bbdb-4601-b252-17070362186d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  3 22:27:04.520: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1833 pod-service-account-fb2958ae-bbdb-4601-b252-17070362186d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
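The three reads above cover the full contents of the auto-mounted service account volume: the bearer token, the cluster CA bundle, and the pod's namespace. The same check can be made in one pass against the pod from this run (assumed hand-run command):

  kubectl --kubeconfig=/root/.kube/config -n svcaccounts-1833 \
    exec pod-service-account-fb2958ae-bbdb-4601-b252-17070362186d -c test -- \
    ls /var/run/secrets/kubernetes.io/serviceaccount
  # ca.crt  namespace  token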
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:27:05.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1833" for this suite.

• [SLOW TEST:9.977 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":204,"skipped":3415,"failed":0}
SSS
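
For context, the three kubectl exec calls above read the credentials the kubelet projects into every pod that uses a ServiceAccount. A minimal pod that would expose the same files (hypothetical name and image):

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
# The auto-mounted files read by the test:
#   /var/run/secrets/kubernetes.io/serviceaccount/token
#   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
#   /var/run/secrets/kubernetes.io/serviceaccount/namespace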
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:27:05.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  3 22:27:11.811: INFO: Successfully updated pod "pod-update-903ac7eb-72fd-425a-bd45-a821b93662d1"
STEP: verifying the updated pod is in kubernetes
Feb  3 22:27:11.870: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:27:11.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6369" for this suite.

• [SLOW TEST:6.928 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3418,"failed":0}
SSSSSSSSSSSSSSSS
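
Only a few pod fields are mutable in place; labels are among them, and a label change is the kind of update verified above. A sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-demo
  labels:
    time: created          # mutable after creation, unlike most of spec
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine   # hypothetical image
# In-place update:
#   kubectl label pod pod-update-demo time=updated --overwrite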
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:27:11.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  3 22:27:23.278: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:27:23.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4973" for this suite.

• [SLOW TEST:11.392 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
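
The "OK" matched above comes from the termination-message file. A minimal reproduction (hypothetical name): because the pod succeeds, FallbackToLogsOnError still reads the file rather than falling back to the container logs.

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Write the message to the (default) termination-message path and exit 0.
    command: ["/bin/sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError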
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:27:23.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:27:23.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6602" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
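
A sketch of the fixture this case needs: a pod whose command exits non-zero on every start, so it never becomes healthy, yet deletion must still succeed. Hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: always-fails-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/false"]   # crash-loops forever
# Deletion must work regardless of the crash loop:
#   kubectl delete pod always-fails-demo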
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:27:23.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:28:24.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2475" for this suite.

• [SLOW TEST:60.245 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3504,"failed":0}
SSSSS
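
The 60-second observation above rests on the difference between probe types: a failing readinessProbe keeps the pod out of Ready (and out of Service endpoints) but, unlike a livenessProbe, never restarts the container. Hypothetical sketch:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: Running forever, Ready never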
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:28:24.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:28:32.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-703" for this suite.

• [SLOW TEST:8.208 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3509,"failed":0}
SSSSSSSSSSSSSSS
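
Minimal reproduction of the case above (hypothetical names): container stdout/stderr is what kubectl logs returns.

apiVersion: v1
kind: Pod
metadata:
  name: logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo 'hello from busybox'"]
# Retrieve the echoed line:
#   kubectl logs logs-demo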
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:28:32.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-2wch
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 22:28:32.378: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2wch" in namespace "subpath-9187" to be "success or failure"
Feb  3 22:28:32.399: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Pending", Reason="", readiness=false. Elapsed: 21.273884ms
Feb  3 22:28:34.408: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030800965s
Feb  3 22:28:36.418: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04036872s
Feb  3 22:28:38.426: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 6.048353756s
Feb  3 22:28:40.435: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 8.057449974s
Feb  3 22:28:42.451: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 10.073650508s
Feb  3 22:28:44.472: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 12.09434162s
Feb  3 22:28:46.481: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 14.103233116s
Feb  3 22:28:48.493: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 16.115245581s
Feb  3 22:28:50.506: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 18.128090791s
Feb  3 22:28:52.517: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 20.139169022s
Feb  3 22:28:54.527: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 22.149416417s
Feb  3 22:28:56.537: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 24.159035443s
Feb  3 22:28:58.544: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Running", Reason="", readiness=true. Elapsed: 26.166451724s
Feb  3 22:29:00.554: INFO: Pod "pod-subpath-test-configmap-2wch": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.17671335s
STEP: Saw pod success
Feb  3 22:29:00.555: INFO: Pod "pod-subpath-test-configmap-2wch" satisfied condition "success or failure"
Feb  3 22:29:00.560: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-2wch container test-container-subpath-configmap-2wch: 
STEP: delete the pod
Feb  3 22:29:00.774: INFO: Waiting for pod pod-subpath-test-configmap-2wch to disappear
Feb  3 22:29:00.823: INFO: Pod pod-subpath-test-configmap-2wch no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2wch
Feb  3 22:29:00.824: INFO: Deleting pod "pod-subpath-test-configmap-2wch" in namespace "subpath-9187"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:29:00.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9187" for this suite.

• [SLOW TEST:28.676 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":210,"skipped":3524,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
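
For context, the subPath mount tested above places a single ConfigMap key over an already-existing file in the container image, instead of shadowing the whole directory. A sketch, all names hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: cfg
    configMap:
      name: my-config       # assumed to contain a key named "hostname"
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hostname"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/hostname   # an existing file in the image
      subPath: hostname          # only this key is mounted over it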
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:29:00.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:29:01.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:29:07.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-337" for this suite.

• [SLOW TEST:6.389 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3549,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
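
The same log stream is also reachable without kubectl: the apiserver accepts a WebSocket upgrade on the pod's log subresource, which is the transport this case exercises. Sketch of a pod that keeps writing (hypothetical name):

apiVersion: v1
kind: Pod
metadata:
  name: ws-logs-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "while true; do echo tick; sleep 1; done"]
# Log endpoint behind both kubectl and the websocket client:
#   GET /api/v1/namespaces/<ns>/pods/ws-logs-demo/log?follow=true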
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:29:07.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:29:07.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014" in namespace "downward-api-1987" to be "success or failure"
Feb  3 22:29:07.475: INFO: Pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014": Phase="Pending", Reason="", readiness=false. Elapsed: 9.565671ms
Feb  3 22:29:09.481: INFO: Pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015852883s
Feb  3 22:29:11.523: INFO: Pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057901646s
Feb  3 22:29:13.531: INFO: Pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065586356s
Feb  3 22:29:15.535: INFO: Pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070461013s
Feb  3 22:29:17.548: INFO: Pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082798935s
STEP: Saw pod success
Feb  3 22:29:17.548: INFO: Pod "downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014" satisfied condition "success or failure"
Feb  3 22:29:17.553: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014 container client-container: 
STEP: delete the pod
Feb  3 22:29:17.628: INFO: Waiting for pod downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014 to disappear
Feb  3 22:29:17.685: INFO: Pod downwardapi-volume-ba70d014-9063-426f-9c27-19a873856014 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:29:17.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1987" for this suite.

• [SLOW TEST:10.353 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3571,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
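
What the volume plugin asserts above: a resourceFieldRef for limits.cpu falls back to the node's allocatable CPU when the container sets no limit. Hypothetical sketch:

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: main
          resource: limits.cpu
          divisor: 1m
  containers:
  - name: main          # no resources.limits set, so node allocatable applies
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo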
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:29:17.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  3 22:29:17.863: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  3 22:29:22.870: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:29:22.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7393" for this suite.

• [SLOW TEST:5.363 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":213,"skipped":3618,"failed":0}
SSSSSSSSS
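
"Releasing" above means orphaning by label: once a pod no longer matches the controller's selector, the ReplicationController stops owning it and creates a replacement. Sketch, with the rc name taken from the log and the rest hypothetical:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: main
        image: docker.io/library/httpd:2.4.38-alpine   # hypothetical image
# Changing the matched label releases the pod:
#   kubectl label pod <pod-name> name=released --overwrite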
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:29:23.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:29:24.460: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 22:29:26.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:29:28.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:29:30.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:29:32.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716365764, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:29:35.539: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:29:35.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8639" for this suite.
STEP: Destroying namespace "webhook-8639-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.824 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":214,"skipped":3627,"failed":0}
SSSSSSSSSSSSSSSSSSSS
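
The update/patch steps above toggle the CREATE operation in the webhook's rules; while it is absent, the otherwise non-compliant ConfigMap is admitted. A sketch of such a configuration (service name and namespace from the log, everything else hypothetical):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-validating-webhook
webhooks:
- name: deny-configmaps.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: webhook-8639
      name: e2e-test-webhook
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]      # removed by the update step, restored by the patch step
    resources: ["configmaps"]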
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:29:35.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  3 22:29:36.009: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:29:52.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6010" for this suite.

• [SLOW TEST:16.870 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":215,"skipped":3647,"failed":0}
SSSSSSSSSSSSSSS
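
With restartPolicy: Never, init containers still run to completion, in order, before the app container starts, which is the invocation checked above. Hypothetical sketch:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:        # executed sequentially before 'main'
  - name: init-1
    image: busybox
    command: ["/bin/true"]
  - name: init-2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: main
    image: busybox
    command: ["/bin/true"]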
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:29:52.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4732
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb  3 22:29:52.976: INFO: Found 0 stateful pods, waiting for 3
Feb  3 22:30:03.009: INFO: Found 2 stateful pods, waiting for 3
Feb  3 22:30:12.985: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:30:12.985: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:30:12.985: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 22:30:22.999: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:30:23.000: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:30:23.000: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb  3 22:30:23.037: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  3 22:30:33.127: INFO: Updating stateful set ss2
Feb  3 22:30:33.145: INFO: Waiting for Pod statefulset-4732/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb  3 22:30:43.572: INFO: Found 2 stateful pods, waiting for 3
Feb  3 22:30:53.582: INFO: Found 2 stateful pods, waiting for 3
Feb  3 22:31:03.583: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:31:03.583: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:31:03.583: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Feb  3 22:31:13.620: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:31:13.621: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:31:13.621: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  3 22:31:13.668: INFO: Updating stateful set ss2
Feb  3 22:31:13.722: INFO: Waiting for Pod statefulset-4732/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  3 22:31:24.160: INFO: Updating stateful set ss2
Feb  3 22:31:24.462: INFO: Waiting for StatefulSet statefulset-4732/ss2 to complete update
Feb  3 22:31:24.463: INFO: Waiting for Pod statefulset-4732/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  3 22:31:34.482: INFO: Waiting for StatefulSet statefulset-4732/ss2 to complete update
Feb  3 22:31:34.482: INFO: Waiting for Pod statefulset-4732/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 22:31:44.479: INFO: Deleting all statefulset in ns statefulset-4732
Feb  3 22:31:44.483: INFO: Scaling statefulset ss2 to 0
Feb  3 22:32:14.544: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:32:14.549: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:32:14.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4732" for this suite.

• [SLOW TEST:141.894 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":216,"skipped":3662,"failed":0}
SSSSSS
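
Both the canary and the phased rollout above are driven by the RollingUpdate partition: only ordinals >= partition receive the new revision, and lowering the partition phases the update across the rest. Sketch with the image pair from the log, other details hypothetical:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2     # canary: only ss2-2 updates; lower toward 0 to phase the rollout
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine   # the updated image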
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:32:14.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-f39ae63b-b266-49f9-b98a-2547eac332ff
STEP: Creating a pod to test consume configMaps
Feb  3 22:32:14.919: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22" in namespace "configmap-6896" to be "success or failure"
Feb  3 22:32:14.966: INFO: Pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22": Phase="Pending", Reason="", readiness=false. Elapsed: 46.661457ms
Feb  3 22:32:16.971: INFO: Pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052101525s
Feb  3 22:32:18.978: INFO: Pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058660442s
Feb  3 22:32:20.988: INFO: Pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068772996s
Feb  3 22:32:22.996: INFO: Pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076648561s
Feb  3 22:32:25.002: INFO: Pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082374168s
STEP: Saw pod success
Feb  3 22:32:25.002: INFO: Pod "pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22" satisfied condition "success or failure"
Feb  3 22:32:25.005: INFO: Trying to get logs from node jerma-node pod pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22 container configmap-volume-test: 
STEP: delete the pod
Feb  3 22:32:25.079: INFO: Waiting for pod pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22 to disappear
Feb  3 22:32:25.083: INFO: Pod pod-configmaps-bf56c1d8-289d-4047-9ae5-31895c4b9c22 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:32:25.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6896" for this suite.

• [SLOW TEST:10.438 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3668,"failed":0}
SSSSSSSS
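
The "mappings" in this case are configMap items (key-to-path renames), consumed by a non-root container. Hypothetical sketch:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root, per the [LinuxOnly] variant
  volumes:
  - name: cfg
    configMap:
      name: my-config
      items:               # the mapping: ConfigMap key -> file path in the mount
      - key: data-1
        path: path/to/data-2
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/cfg/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg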
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:32:25.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  3 22:32:25.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2095'
Feb  3 22:32:25.518: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 22:32:25.519: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Feb  3 22:32:25.524: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  3 22:32:25.621: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  3 22:32:25.665: INFO: scanned /root for discovery docs: 
Feb  3 22:32:25.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2095'
Feb  3 22:32:48.945: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  3 22:32:48.945: INFO: stdout: "Created e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59\nScaling up e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb  3 22:32:48.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2095'
Feb  3 22:32:49.155: INFO: stderr: ""
Feb  3 22:32:49.155: INFO: stdout: "e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59-4t4nm "
Feb  3 22:32:49.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59-4t4nm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2095'
Feb  3 22:32:49.233: INFO: stderr: ""
Feb  3 22:32:49.233: INFO: stdout: "true"
Feb  3 22:32:49.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59-4t4nm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2095'
Feb  3 22:32:49.371: INFO: stderr: ""
Feb  3 22:32:49.371: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb  3 22:32:49.371: INFO: e2e-test-httpd-rc-f1a450258dd660a1732c69da5c38da59-4t4nm is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Feb  3 22:32:49.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2095'
Feb  3 22:32:49.503: INFO: stderr: ""
Feb  3 22:32:49.503: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:32:49.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2095" for this suite.

• [SLOW TEST:24.435 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":218,"skipped":3676,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:32:49.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0203 22:33:31.620833       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 22:33:31.620: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:33:31.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5599" for this suite.

• [SLOW TEST:42.103 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":219,"skipped":3743,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:33:31.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3411
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb  3 22:33:31.828: INFO: Found 0 stateful pods, waiting for 3
Feb  3 22:33:42.442: INFO: Found 1 stateful pods, waiting for 3
Feb  3 22:33:52.260: INFO: Found 2 stateful pods, waiting for 3
Feb  3 22:34:01.836: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:34:01.836: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:34:01.836: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 22:34:11.839: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:34:11.839: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:34:11.839: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:34:11.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3411 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:34:14.808: INFO: stderr: "I0203 22:34:14.433302    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9ea0) Create stream\nI0203 22:34:14.433535    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9ea0) Stream added, broadcasting: 1\nI0203 22:34:14.437802    3267 log.go:172] (0xc0008a0bb0) Reply frame received for 1\nI0203 22:34:14.438033    3267 log.go:172] (0xc0008a0bb0) (0xc000688780) Create stream\nI0203 22:34:14.438094    3267 log.go:172] (0xc0008a0bb0) (0xc000688780) Stream added, broadcasting: 3\nI0203 22:34:14.444094    3267 log.go:172] (0xc0008a0bb0) Reply frame received for 3\nI0203 22:34:14.444167    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9f40) Create stream\nI0203 22:34:14.444192    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9f40) Stream added, broadcasting: 5\nI0203 22:34:14.446748    3267 log.go:172] (0xc0008a0bb0) Reply frame received for 5\nI0203 22:34:14.558676    3267 log.go:172] (0xc0008a0bb0) Data frame received for 5\nI0203 22:34:14.558849    3267 log.go:172] (0xc0006f9f40) (5) Data frame handling\nI0203 22:34:14.558901    3267 log.go:172] (0xc0006f9f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:34:14.668019    3267 log.go:172] (0xc0008a0bb0) Data frame received for 3\nI0203 22:34:14.668079    3267 log.go:172] (0xc000688780) (3) Data frame handling\nI0203 22:34:14.668106    3267 log.go:172] (0xc000688780) (3) Data frame sent\nI0203 22:34:14.787284    3267 log.go:172] (0xc0008a0bb0) Data frame received for 1\nI0203 22:34:14.787465    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9f40) Stream removed, broadcasting: 5\nI0203 22:34:14.787547    3267 log.go:172] (0xc0006f9ea0) (1) Data frame handling\nI0203 22:34:14.787597    3267 log.go:172] (0xc0006f9ea0) (1) Data frame sent\nI0203 22:34:14.787811    3267 log.go:172] (0xc0008a0bb0) (0xc000688780) Stream removed, broadcasting: 3\nI0203 22:34:14.787956    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9ea0) Stream removed, broadcasting: 1\nI0203 22:34:14.788007    3267 log.go:172] (0xc0008a0bb0) Go away received\nI0203 22:34:14.789686    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9ea0) Stream removed, broadcasting: 1\nI0203 22:34:14.789747    3267 log.go:172] (0xc0008a0bb0) (0xc000688780) Stream removed, broadcasting: 3\nI0203 22:34:14.789779    3267 log.go:172] (0xc0008a0bb0) (0xc0006f9f40) Stream removed, broadcasting: 5\n"
Feb  3 22:34:14.809: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:34:14.809: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb  3 22:34:14.847: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  3 22:34:24.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3411 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:34:25.281: INFO: stderr: "I0203 22:34:25.118758    3300 log.go:172] (0xc00093d550) (0xc000928780) Create stream\nI0203 22:34:25.118879    3300 log.go:172] (0xc00093d550) (0xc000928780) Stream added, broadcasting: 1\nI0203 22:34:25.126461    3300 log.go:172] (0xc00093d550) Reply frame received for 1\nI0203 22:34:25.126510    3300 log.go:172] (0xc00093d550) (0xc0006d85a0) Create stream\nI0203 22:34:25.126519    3300 log.go:172] (0xc00093d550) (0xc0006d85a0) Stream added, broadcasting: 3\nI0203 22:34:25.127597    3300 log.go:172] (0xc00093d550) Reply frame received for 3\nI0203 22:34:25.127630    3300 log.go:172] (0xc00093d550) (0xc000928000) Create stream\nI0203 22:34:25.127640    3300 log.go:172] (0xc00093d550) (0xc000928000) Stream added, broadcasting: 5\nI0203 22:34:25.129206    3300 log.go:172] (0xc00093d550) Reply frame received for 5\nI0203 22:34:25.195359    3300 log.go:172] (0xc00093d550) Data frame received for 5\nI0203 22:34:25.195441    3300 log.go:172] (0xc000928000) (5) Data frame handling\nI0203 22:34:25.195486    3300 log.go:172] (0xc000928000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:34:25.195735    3300 log.go:172] (0xc00093d550) Data frame received for 3\nI0203 22:34:25.195761    3300 log.go:172] (0xc0006d85a0) (3) Data frame handling\nI0203 22:34:25.195779    3300 log.go:172] (0xc0006d85a0) (3) Data frame sent\nI0203 22:34:25.269492    3300 log.go:172] (0xc00093d550) Data frame received for 1\nI0203 22:34:25.269566    3300 log.go:172] (0xc00093d550) (0xc0006d85a0) Stream removed, broadcasting: 3\nI0203 22:34:25.269791    3300 log.go:172] (0xc000928780) (1) Data frame handling\nI0203 22:34:25.269826    3300 log.go:172] (0xc000928780) (1) Data frame sent\nI0203 22:34:25.269836    3300 log.go:172] (0xc00093d550) (0xc000928780) Stream removed, broadcasting: 1\nI0203 22:34:25.270144    3300 log.go:172] (0xc00093d550) (0xc000928000) Stream removed, broadcasting: 5\nI0203 22:34:25.270205    3300 log.go:172] (0xc00093d550) Go away received\nI0203 22:34:25.271046    3300 log.go:172] (0xc00093d550) (0xc000928780) Stream removed, broadcasting: 1\nI0203 22:34:25.271065    3300 log.go:172] (0xc00093d550) (0xc0006d85a0) Stream removed, broadcasting: 3\nI0203 22:34:25.271085    3300 log.go:172] (0xc00093d550) (0xc000928000) Stream removed, broadcasting: 5\n"
Feb  3 22:34:25.281: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:34:25.281: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:34:35.311: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:34:35.312: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-84f9d6bf57 (pod currently at revision ss2-65c7964b94)
Feb  3 22:34:35.312: INFO: Waiting for Pod statefulset-3411/ss2-1 to have update revision ss2-84f9d6bf57 (pod currently at revision ss2-65c7964b94)
Feb  3 22:34:45.324: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:34:45.324: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-84f9d6bf57 (pod currently at revision ss2-65c7964b94)
Feb  3 22:34:55.324: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:34:55.324: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-84f9d6bf57 (pod currently at revision ss2-65c7964b94)
Feb  3 22:35:05.331: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  3 22:35:15.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3411 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:35:15.813: INFO: stderr: "I0203 22:35:15.594212    3320 log.go:172] (0xc0003c0160) (0xc0005ec5a0) Create stream\nI0203 22:35:15.594322    3320 log.go:172] (0xc0003c0160) (0xc0005ec5a0) Stream added, broadcasting: 1\nI0203 22:35:15.598836    3320 log.go:172] (0xc0003c0160) Reply frame received for 1\nI0203 22:35:15.598961    3320 log.go:172] (0xc0003c0160) (0xc00021f360) Create stream\nI0203 22:35:15.599052    3320 log.go:172] (0xc0003c0160) (0xc00021f360) Stream added, broadcasting: 3\nI0203 22:35:15.600232    3320 log.go:172] (0xc0003c0160) Reply frame received for 3\nI0203 22:35:15.600255    3320 log.go:172] (0xc0003c0160) (0xc000667ae0) Create stream\nI0203 22:35:15.600267    3320 log.go:172] (0xc0003c0160) (0xc000667ae0) Stream added, broadcasting: 5\nI0203 22:35:15.601188    3320 log.go:172] (0xc0003c0160) Reply frame received for 5\nI0203 22:35:15.687882    3320 log.go:172] (0xc0003c0160) Data frame received for 5\nI0203 22:35:15.687927    3320 log.go:172] (0xc000667ae0) (5) Data frame handling\nI0203 22:35:15.687949    3320 log.go:172] (0xc000667ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:35:15.731851    3320 log.go:172] (0xc0003c0160) Data frame received for 3\nI0203 22:35:15.731969    3320 log.go:172] (0xc00021f360) (3) Data frame handling\nI0203 22:35:15.732019    3320 log.go:172] (0xc00021f360) (3) Data frame sent\nI0203 22:35:15.804461    3320 log.go:172] (0xc0003c0160) Data frame received for 1\nI0203 22:35:15.804580    3320 log.go:172] (0xc0003c0160) (0xc000667ae0) Stream removed, broadcasting: 5\nI0203 22:35:15.804638    3320 log.go:172] (0xc0005ec5a0) (1) Data frame handling\nI0203 22:35:15.804651    3320 log.go:172] (0xc0005ec5a0) (1) Data frame sent\nI0203 22:35:15.804672    3320 log.go:172] (0xc0003c0160) (0xc00021f360) Stream removed, broadcasting: 3\nI0203 22:35:15.804712    3320 log.go:172] (0xc0003c0160) (0xc0005ec5a0) Stream removed, broadcasting: 1\nI0203 22:35:15.804727    3320 log.go:172] (0xc0003c0160) Go away received\nI0203 22:35:15.805742    3320 log.go:172] (0xc0003c0160) (0xc0005ec5a0) Stream removed, broadcasting: 1\nI0203 22:35:15.805763    3320 log.go:172] (0xc0003c0160) (0xc00021f360) Stream removed, broadcasting: 3\nI0203 22:35:15.805776    3320 log.go:172] (0xc0003c0160) (0xc000667ae0) Stream removed, broadcasting: 5\n"
Feb  3 22:35:15.813: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:35:15.813: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:35:25.860: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  3 22:35:35.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3411 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:35:36.378: INFO: stderr: "I0203 22:35:36.158123    3343 log.go:172] (0xc000a21290) (0xc000a30820) Create stream\nI0203 22:35:36.158299    3343 log.go:172] (0xc000a21290) (0xc000a30820) Stream added, broadcasting: 1\nI0203 22:35:36.189530    3343 log.go:172] (0xc000a21290) Reply frame received for 1\nI0203 22:35:36.189690    3343 log.go:172] (0xc000a21290) (0xc0006c85a0) Create stream\nI0203 22:35:36.189719    3343 log.go:172] (0xc000a21290) (0xc0006c85a0) Stream added, broadcasting: 3\nI0203 22:35:36.191645    3343 log.go:172] (0xc000a21290) Reply frame received for 3\nI0203 22:35:36.191716    3343 log.go:172] (0xc000a21290) (0xc0005af360) Create stream\nI0203 22:35:36.191733    3343 log.go:172] (0xc000a21290) (0xc0005af360) Stream added, broadcasting: 5\nI0203 22:35:36.193570    3343 log.go:172] (0xc000a21290) Reply frame received for 5\nI0203 22:35:36.292537    3343 log.go:172] (0xc000a21290) Data frame received for 3\nI0203 22:35:36.292722    3343 log.go:172] (0xc0006c85a0) (3) Data frame handling\nI0203 22:35:36.292778    3343 log.go:172] (0xc0006c85a0) (3) Data frame sent\nI0203 22:35:36.293012    3343 log.go:172] (0xc000a21290) Data frame received for 5\nI0203 22:35:36.293111    3343 log.go:172] (0xc0005af360) (5) Data frame handling\nI0203 22:35:36.293164    3343 log.go:172] (0xc0005af360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:35:36.369019    3343 log.go:172] (0xc000a21290) Data frame received for 1\nI0203 22:35:36.369205    3343 log.go:172] (0xc000a30820) (1) Data frame handling\nI0203 22:35:36.369230    3343 log.go:172] (0xc000a30820) (1) Data frame sent\nI0203 22:35:36.369281    3343 log.go:172] (0xc000a21290) (0xc000a30820) Stream removed, broadcasting: 1\nI0203 22:35:36.369660    3343 log.go:172] (0xc000a21290) (0xc0006c85a0) Stream removed, broadcasting: 3\nI0203 22:35:36.369740    3343 log.go:172] (0xc000a21290) (0xc0005af360) Stream removed, broadcasting: 5\nI0203 22:35:36.369827    3343 log.go:172] (0xc000a21290) Go away received\nI0203 22:35:36.370228    3343 log.go:172] (0xc000a21290) (0xc000a30820) Stream removed, broadcasting: 1\nI0203 22:35:36.370283    3343 log.go:172] (0xc000a21290) (0xc0006c85a0) Stream removed, broadcasting: 3\nI0203 22:35:36.370297    3343 log.go:172] (0xc000a21290) (0xc0005af360) Stream removed, broadcasting: 5\n"
Feb  3 22:35:36.378: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:35:36.378: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:35:46.404: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:35:46.405: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:35:46.405: INFO: Waiting for Pod statefulset-3411/ss2-1 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:35:46.405: INFO: Waiting for Pod statefulset-3411/ss2-2 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:35:56.420: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:35:56.420: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:35:56.420: INFO: Waiting for Pod statefulset-3411/ss2-1 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:36:06.421: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:36:06.421: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:36:06.421: INFO: Waiting for Pod statefulset-3411/ss2-1 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:36:16.418: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:36:16.418: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:36:26.420: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
Feb  3 22:36:26.420: INFO: Waiting for Pod statefulset-3411/ss2-0 to have update revision ss2-65c7964b94 (pod currently at revision ss2-84f9d6bf57)
Feb  3 22:36:36.419: INFO: Waiting for StatefulSet statefulset-3411/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 22:36:46.418: INFO: Deleting all statefulset in ns statefulset-3411
Feb  3 22:36:46.424: INFO: Scaling statefulset ss2 to 0
Feb  3 22:37:26.449: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:37:26.454: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:37:26.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3411" for this suite.

• [SLOW TEST:234.875 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":220,"skipped":3743,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:37:26.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  3 22:37:26.748: INFO: Waiting up to 5m0s for pod "pod-a48db050-bd04-4628-8450-f69d90509953" in namespace "emptydir-704" to be "success or failure"
Feb  3 22:37:26.761: INFO: Pod "pod-a48db050-bd04-4628-8450-f69d90509953": Phase="Pending", Reason="", readiness=false. Elapsed: 13.534652ms
Feb  3 22:37:28.775: INFO: Pod "pod-a48db050-bd04-4628-8450-f69d90509953": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027558415s
Feb  3 22:37:30.783: INFO: Pod "pod-a48db050-bd04-4628-8450-f69d90509953": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035544912s
Feb  3 22:37:32.839: INFO: Pod "pod-a48db050-bd04-4628-8450-f69d90509953": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090581562s
Feb  3 22:37:34.850: INFO: Pod "pod-a48db050-bd04-4628-8450-f69d90509953": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101690124s
STEP: Saw pod success
Feb  3 22:37:34.850: INFO: Pod "pod-a48db050-bd04-4628-8450-f69d90509953" satisfied condition "success or failure"
Feb  3 22:37:34.856: INFO: Trying to get logs from node jerma-node pod pod-a48db050-bd04-4628-8450-f69d90509953 container test-container: 
STEP: delete the pod
Feb  3 22:37:34.950: INFO: Waiting for pod pod-a48db050-bd04-4628-8450-f69d90509953 to disappear
Feb  3 22:37:35.096: INFO: Pod pod-a48db050-bd04-4628-8450-f69d90509953 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:37:35.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-704" for this suite.

• [SLOW TEST:8.639 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3749,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:37:35.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb  3 22:37:35.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:37:56.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8032" for this suite.

• [SLOW TEST:20.938 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":222,"skipped":3757,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:37:56.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  3 22:37:56.177: INFO: Waiting up to 5m0s for pod "downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80" in namespace "downward-api-9558" to be "success or failure"
Feb  3 22:37:56.245: INFO: Pod "downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80": Phase="Pending", Reason="", readiness=false. Elapsed: 67.43044ms
Feb  3 22:37:58.251: INFO: Pod "downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073956262s
Feb  3 22:38:00.259: INFO: Pod "downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081582919s
Feb  3 22:38:02.269: INFO: Pod "downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091684536s
STEP: Saw pod success
Feb  3 22:38:02.269: INFO: Pod "downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80" satisfied condition "success or failure"
Feb  3 22:38:02.274: INFO: Trying to get logs from node jerma-node pod downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80 container dapi-container: 
STEP: delete the pod
Feb  3 22:38:02.334: INFO: Waiting for pod downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80 to disappear
Feb  3 22:38:02.341: INFO: Pod downward-api-27fedd7d-7a91-4774-82b4-5b1a31978b80 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:38:02.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9558" for this suite.

• [SLOW TEST:6.313 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3764,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:38:02.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:38:02.777: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b5da0028-9488-4084-8a5a-497633f83bbc", Controller:(*bool)(0xc003292772), BlockOwnerDeletion:(*bool)(0xc003292773)}}
Feb  3 22:38:02.790: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f8c60e0c-6b9d-4cb7-963d-663a73320339", Controller:(*bool)(0xc0032928fa), BlockOwnerDeletion:(*bool)(0xc0032928fb)}}
Feb  3 22:38:02.884: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"00e532ff-0b88-4278-8e9d-7400e03b1c81", Controller:(*bool)(0xc003232122), BlockOwnerDeletion:(*bool)(0xc003232123)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:38:07.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6261" for this suite.

• [SLOW TEST:5.498 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":224,"skipped":3767,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:38:07.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:38:08.864: INFO: Create a RollingUpdate DaemonSet
Feb  3 22:38:08.961: INFO: Check that daemon pods launch on every node of the cluster
Feb  3 22:38:08.980: INFO: Number of nodes with available pods: 0
Feb  3 22:38:08.980: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:10.000: INFO: Number of nodes with available pods: 0
Feb  3 22:38:10.000: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:11.367: INFO: Number of nodes with available pods: 0
Feb  3 22:38:11.367: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:12.178: INFO: Number of nodes with available pods: 0
Feb  3 22:38:12.178: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:13.028: INFO: Number of nodes with available pods: 0
Feb  3 22:38:13.029: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:13.997: INFO: Number of nodes with available pods: 0
Feb  3 22:38:13.997: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:15.911: INFO: Number of nodes with available pods: 0
Feb  3 22:38:15.911: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:16.822: INFO: Number of nodes with available pods: 0
Feb  3 22:38:16.822: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:16.989: INFO: Number of nodes with available pods: 0
Feb  3 22:38:16.989: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:17.997: INFO: Number of nodes with available pods: 0
Feb  3 22:38:17.997: INFO: Node jerma-node is running more than one daemon pod
Feb  3 22:38:18.995: INFO: Number of nodes with available pods: 2
Feb  3 22:38:18.995: INFO: Number of running nodes: 2, number of available pods: 2
Feb  3 22:38:18.995: INFO: Update the DaemonSet to trigger a rollout
Feb  3 22:38:19.002: INFO: Updating DaemonSet daemon-set
Feb  3 22:38:33.026: INFO: Roll back the DaemonSet before rollout is complete
Feb  3 22:38:33.033: INFO: Updating DaemonSet daemon-set
Feb  3 22:38:33.033: INFO: Make sure DaemonSet rollback is complete
Feb  3 22:38:33.042: INFO: Wrong image for pod: daemon-set-975wx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  3 22:38:33.042: INFO: Pod daemon-set-975wx is not available
Feb  3 22:38:34.055: INFO: Wrong image for pod: daemon-set-975wx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  3 22:38:34.055: INFO: Pod daemon-set-975wx is not available
Feb  3 22:38:35.062: INFO: Wrong image for pod: daemon-set-975wx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  3 22:38:35.062: INFO: Pod daemon-set-975wx is not available
Feb  3 22:38:36.055: INFO: Wrong image for pod: daemon-set-975wx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  3 22:38:36.055: INFO: Pod daemon-set-975wx is not available
Feb  3 22:38:37.056: INFO: Pod daemon-set-qqksv is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5048, will wait for the garbage collector to delete the pods
Feb  3 22:38:37.137: INFO: Deleting DaemonSet.extensions daemon-set took: 10.252836ms
Feb  3 22:38:37.537: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.567515ms
Feb  3 22:38:43.579: INFO: Number of nodes with available pods: 0
Feb  3 22:38:43.579: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 22:38:43.583: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5048/daemonsets","resourceVersion":"6216922"},"items":null}

Feb  3 22:38:43.587: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5048/pods","resourceVersion":"6216922"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:38:43.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5048" for this suite.

• [SLOW TEST:35.703 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":225,"skipped":3774,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:38:43.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Feb  3 22:38:43.673: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb  3 22:38:43.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8900'
Feb  3 22:38:44.344: INFO: stderr: ""
Feb  3 22:38:44.344: INFO: stdout: "service/agnhost-slave created\n"
Feb  3 22:38:44.345: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb  3 22:38:44.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8900'
Feb  3 22:38:44.826: INFO: stderr: ""
Feb  3 22:38:44.826: INFO: stdout: "service/agnhost-master created\n"
Feb  3 22:38:44.827: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  3 22:38:44.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8900'
Feb  3 22:38:45.224: INFO: stderr: ""
Feb  3 22:38:45.224: INFO: stdout: "service/frontend created\n"
Feb  3 22:38:45.225: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb  3 22:38:45.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8900'
Feb  3 22:38:45.589: INFO: stderr: ""
Feb  3 22:38:45.589: INFO: stdout: "deployment.apps/frontend created\n"
Feb  3 22:38:45.590: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  3 22:38:45.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8900'
Feb  3 22:38:46.161: INFO: stderr: ""
Feb  3 22:38:46.161: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb  3 22:38:46.163: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  3 22:38:46.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8900'
Feb  3 22:38:47.398: INFO: stderr: ""
Feb  3 22:38:47.399: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb  3 22:38:47.399: INFO: Waiting for all frontend pods to be Running.
Feb  3 22:39:07.451: INFO: Waiting for frontend to serve content.
Feb  3 22:39:07.536: INFO: Trying to add a new entry to the guestbook.
Feb  3 22:39:07.558: INFO: Verifying that added entry can be retrieved.
Feb  3 22:39:07.579: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Feb  3 22:39:12.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8900'
Feb  3 22:39:13.045: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:39:13.045: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 22:39:13.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8900'
Feb  3 22:39:13.358: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:39:13.358: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 22:39:13.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8900'
Feb  3 22:39:13.640: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:39:13.640: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 22:39:13.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8900'
Feb  3 22:39:13.835: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:39:13.835: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 22:39:13.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8900'
Feb  3 22:39:13.977: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:39:13.978: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 22:39:13.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8900'
Feb  3 22:39:14.359: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:39:14.359: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:39:14.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8900" for this suite.

• [SLOW TEST:30.847 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":226,"skipped":3799,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:39:14.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-42be949c-5ed3-4346-ba4f-67ec85722967
STEP: Creating a pod to test consume configMaps
Feb  3 22:39:17.315: INFO: Waiting up to 5m0s for pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe" in namespace "configmap-3656" to be "success or failure"
Feb  3 22:39:17.468: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe": Phase="Pending", Reason="", readiness=false. Elapsed: 152.673196ms
Feb  3 22:39:20.177: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.862138994s
Feb  3 22:39:22.202: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.886934547s
Feb  3 22:39:24.208: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.893128951s
Feb  3 22:39:26.215: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.899520347s
Feb  3 22:39:28.220: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.905052933s
Feb  3 22:39:30.228: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.912254272s
STEP: Saw pod success
Feb  3 22:39:30.228: INFO: Pod "pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe" satisfied condition "success or failure"
Feb  3 22:39:30.232: INFO: Trying to get logs from node jerma-node pod pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe container configmap-volume-test: 
STEP: delete the pod
Feb  3 22:39:30.281: INFO: Waiting for pod pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe to disappear
Feb  3 22:39:30.370: INFO: Pod pod-configmaps-faf44822-cf65-4f6e-b659-18db3ce217fe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:39:30.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3656" for this suite.

• [SLOW TEST:15.924 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3808,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:39:30.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:39:31.180: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb  3 22:39:33.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:39:35.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:39:37.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366371, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:39:40.469: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:39:40.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:39:42.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-10" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.910 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":228,"skipped":3823,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:39:42.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6925
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6925
I0203 22:39:42.494097       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6925, replica count: 2
I0203 22:39:45.545836       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:39:48.546753       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:39:51.547581       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:39:54.548415       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 22:39:54.548: INFO: Creating new exec pod
Feb  3 22:40:03.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6925 execpod9l5t7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb  3 22:40:04.113: INFO: stderr: "I0203 22:40:03.890862    3610 log.go:172] (0xc00098cfd0) (0xc000b7a3c0) Create stream\nI0203 22:40:03.891110    3610 log.go:172] (0xc00098cfd0) (0xc000b7a3c0) Stream added, broadcasting: 1\nI0203 22:40:03.895188    3610 log.go:172] (0xc00098cfd0) Reply frame received for 1\nI0203 22:40:03.895297    3610 log.go:172] (0xc00098cfd0) (0xc00097c1e0) Create stream\nI0203 22:40:03.895356    3610 log.go:172] (0xc00098cfd0) (0xc00097c1e0) Stream added, broadcasting: 3\nI0203 22:40:03.897439    3610 log.go:172] (0xc00098cfd0) Reply frame received for 3\nI0203 22:40:03.897470    3610 log.go:172] (0xc00098cfd0) (0xc00097c280) Create stream\nI0203 22:40:03.897479    3610 log.go:172] (0xc00098cfd0) (0xc00097c280) Stream added, broadcasting: 5\nI0203 22:40:03.899932    3610 log.go:172] (0xc00098cfd0) Reply frame received for 5\nI0203 22:40:03.993105    3610 log.go:172] (0xc00098cfd0) Data frame received for 5\nI0203 22:40:03.993297    3610 log.go:172] (0xc00097c280) (5) Data frame handling\nI0203 22:40:03.993365    3610 log.go:172] (0xc00097c280) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0203 22:40:04.000726    3610 log.go:172] (0xc00098cfd0) Data frame received for 5\nI0203 22:40:04.000746    3610 log.go:172] (0xc00097c280) (5) Data frame handling\nI0203 22:40:04.000760    3610 log.go:172] (0xc00097c280) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0203 22:40:04.097227    3610 log.go:172] (0xc00098cfd0) Data frame received for 1\nI0203 22:40:04.097484    3610 log.go:172] (0xc00098cfd0) (0xc00097c1e0) Stream removed, broadcasting: 3\nI0203 22:40:04.097705    3610 log.go:172] (0xc00098cfd0) (0xc00097c280) Stream removed, broadcasting: 5\nI0203 22:40:04.097847    3610 log.go:172] (0xc000b7a3c0) (1) Data frame handling\nI0203 22:40:04.097936    3610 log.go:172] (0xc000b7a3c0) (1) Data frame sent\nI0203 22:40:04.097955    3610 log.go:172] (0xc00098cfd0) (0xc000b7a3c0) Stream removed, broadcasting: 1\nI0203 22:40:04.097986    3610 log.go:172] (0xc00098cfd0) Go away received\nI0203 22:40:04.099312    3610 log.go:172] (0xc00098cfd0) (0xc000b7a3c0) Stream removed, broadcasting: 1\nI0203 22:40:04.099338    3610 log.go:172] (0xc00098cfd0) (0xc00097c1e0) Stream removed, broadcasting: 3\nI0203 22:40:04.099349    3610 log.go:172] (0xc00098cfd0) (0xc00097c280) Stream removed, broadcasting: 5\n"
Feb  3 22:40:04.113: INFO: stdout: ""
Feb  3 22:40:04.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6925 execpod9l5t7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.20.44 80'
Feb  3 22:40:04.482: INFO: stderr: "I0203 22:40:04.275678    3629 log.go:172] (0xc000aa3130) (0xc00064ff40) Create stream\nI0203 22:40:04.275809    3629 log.go:172] (0xc000aa3130) (0xc00064ff40) Stream added, broadcasting: 1\nI0203 22:40:04.278570    3629 log.go:172] (0xc000aa3130) Reply frame received for 1\nI0203 22:40:04.278603    3629 log.go:172] (0xc000aa3130) (0xc000a9a1e0) Create stream\nI0203 22:40:04.278609    3629 log.go:172] (0xc000aa3130) (0xc000a9a1e0) Stream added, broadcasting: 3\nI0203 22:40:04.279540    3629 log.go:172] (0xc000aa3130) Reply frame received for 3\nI0203 22:40:04.279559    3629 log.go:172] (0xc000aa3130) (0xc000a9a280) Create stream\nI0203 22:40:04.279567    3629 log.go:172] (0xc000aa3130) (0xc000a9a280) Stream added, broadcasting: 5\nI0203 22:40:04.280718    3629 log.go:172] (0xc000aa3130) Reply frame received for 5\nI0203 22:40:04.371201    3629 log.go:172] (0xc000aa3130) Data frame received for 5\nI0203 22:40:04.371355    3629 log.go:172] (0xc000a9a280) (5) Data frame handling\nI0203 22:40:04.371401    3629 log.go:172] (0xc000a9a280) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.20.44 80\nI0203 22:40:04.374793    3629 log.go:172] (0xc000aa3130) Data frame received for 5\nI0203 22:40:04.374848    3629 log.go:172] (0xc000a9a280) (5) Data frame handling\nI0203 22:40:04.374865    3629 log.go:172] (0xc000a9a280) (5) Data frame sent\nConnection to 10.96.20.44 80 port [tcp/http] succeeded!\nI0203 22:40:04.461945    3629 log.go:172] (0xc000aa3130) Data frame received for 1\nI0203 22:40:04.462204    3629 log.go:172] (0xc000aa3130) (0xc000a9a1e0) Stream removed, broadcasting: 3\nI0203 22:40:04.462245    3629 log.go:172] (0xc00064ff40) (1) Data frame handling\nI0203 22:40:04.462269    3629 log.go:172] (0xc00064ff40) (1) Data frame sent\nI0203 22:40:04.462359    3629 log.go:172] (0xc000aa3130) (0xc00064ff40) Stream removed, broadcasting: 1\nI0203 22:40:04.463569    3629 log.go:172] (0xc000aa3130) (0xc000a9a280) Stream removed, broadcasting: 5\nI0203 22:40:04.463606    3629 log.go:172] (0xc000aa3130) Go away received\nI0203 22:40:04.463972    3629 log.go:172] (0xc000aa3130) (0xc00064ff40) Stream removed, broadcasting: 1\nI0203 22:40:04.464012    3629 log.go:172] (0xc000aa3130) (0xc000a9a1e0) Stream removed, broadcasting: 3\nI0203 22:40:04.464059    3629 log.go:172] (0xc000aa3130) (0xc000a9a280) Stream removed, broadcasting: 5\n"
Feb  3 22:40:04.483: INFO: stdout: ""
Feb  3 22:40:04.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6925 execpod9l5t7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30789'
Feb  3 22:40:04.967: INFO: stderr: "I0203 22:40:04.756259    3644 log.go:172] (0xc000a534a0) (0xc000a10820) Create stream\nI0203 22:40:04.756506    3644 log.go:172] (0xc000a534a0) (0xc000a10820) Stream added, broadcasting: 1\nI0203 22:40:04.760046    3644 log.go:172] (0xc000a534a0) Reply frame received for 1\nI0203 22:40:04.760112    3644 log.go:172] (0xc000a534a0) (0xc0009020a0) Create stream\nI0203 22:40:04.760131    3644 log.go:172] (0xc000a534a0) (0xc0009020a0) Stream added, broadcasting: 3\nI0203 22:40:04.761259    3644 log.go:172] (0xc000a534a0) Reply frame received for 3\nI0203 22:40:04.761279    3644 log.go:172] (0xc000a534a0) (0xc000902140) Create stream\nI0203 22:40:04.761284    3644 log.go:172] (0xc000a534a0) (0xc000902140) Stream added, broadcasting: 5\nI0203 22:40:04.763830    3644 log.go:172] (0xc000a534a0) Reply frame received for 5\nI0203 22:40:04.845891    3644 log.go:172] (0xc000a534a0) Data frame received for 5\nI0203 22:40:04.846040    3644 log.go:172] (0xc000902140) (5) Data frame handling\nI0203 22:40:04.846092    3644 log.go:172] (0xc000902140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30789\nI0203 22:40:04.852946    3644 log.go:172] (0xc000a534a0) Data frame received for 5\nI0203 22:40:04.852975    3644 log.go:172] (0xc000902140) (5) Data frame handling\nI0203 22:40:04.852997    3644 log.go:172] (0xc000902140) (5) Data frame sent\nConnection to 10.96.2.250 30789 port [tcp/30789] succeeded!\nI0203 22:40:04.953869    3644 log.go:172] (0xc000a534a0) Data frame received for 1\nI0203 22:40:04.953922    3644 log.go:172] (0xc000a10820) (1) Data frame handling\nI0203 22:40:04.953936    3644 log.go:172] (0xc000a10820) (1) Data frame sent\nI0203 22:40:04.954288    3644 log.go:172] (0xc000a534a0) (0xc0009020a0) Stream removed, broadcasting: 3\nI0203 22:40:04.954359    3644 log.go:172] (0xc000a534a0) (0xc000a10820) Stream removed, broadcasting: 1\nI0203 22:40:04.961232    3644 log.go:172] (0xc000a534a0) (0xc000902140) Stream removed, broadcasting: 5\nI0203 22:40:04.961270    3644 log.go:172] (0xc000a534a0) Go away received\nI0203 22:40:04.961314    3644 log.go:172] (0xc000a534a0) (0xc000a10820) Stream removed, broadcasting: 1\nI0203 22:40:04.961344    3644 log.go:172] (0xc000a534a0) (0xc0009020a0) Stream removed, broadcasting: 3\nI0203 22:40:04.961358    3644 log.go:172] (0xc000a534a0) (0xc000902140) Stream removed, broadcasting: 5\n"
Feb  3 22:40:04.967: INFO: stdout: ""
Feb  3 22:40:04.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6925 execpod9l5t7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30789'
Feb  3 22:40:05.287: INFO: stderr: "I0203 22:40:05.132217    3662 log.go:172] (0xc000654790) (0xc0008f6000) Create stream\nI0203 22:40:05.132332    3662 log.go:172] (0xc000654790) (0xc0008f6000) Stream added, broadcasting: 1\nI0203 22:40:05.134883    3662 log.go:172] (0xc000654790) Reply frame received for 1\nI0203 22:40:05.134913    3662 log.go:172] (0xc000654790) (0xc0006e1c20) Create stream\nI0203 22:40:05.134943    3662 log.go:172] (0xc000654790) (0xc0006e1c20) Stream added, broadcasting: 3\nI0203 22:40:05.135879    3662 log.go:172] (0xc000654790) Reply frame received for 3\nI0203 22:40:05.135903    3662 log.go:172] (0xc000654790) (0xc0008f60a0) Create stream\nI0203 22:40:05.135911    3662 log.go:172] (0xc000654790) (0xc0008f60a0) Stream added, broadcasting: 5\nI0203 22:40:05.136981    3662 log.go:172] (0xc000654790) Reply frame received for 5\nI0203 22:40:05.207411    3662 log.go:172] (0xc000654790) Data frame received for 5\nI0203 22:40:05.207454    3662 log.go:172] (0xc0008f60a0) (5) Data frame handling\nI0203 22:40:05.207476    3662 log.go:172] (0xc0008f60a0) (5) Data frame sent\nI0203 22:40:05.207486    3662 log.go:172] (0xc000654790) Data frame received for 5\nI0203 22:40:05.207493    3662 log.go:172] (0xc0008f60a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 30789\nI0203 22:40:05.207527    3662 log.go:172] (0xc0008f60a0) (5) Data frame sent\nI0203 22:40:05.211668    3662 log.go:172] (0xc000654790) Data frame received for 5\nI0203 22:40:05.211686    3662 log.go:172] (0xc0008f60a0) (5) Data frame handling\nI0203 22:40:05.211700    3662 log.go:172] (0xc0008f60a0) (5) Data frame sent\nConnection to 10.96.1.234 30789 port [tcp/30789] succeeded!\nI0203 22:40:05.275085    3662 log.go:172] (0xc000654790) Data frame received for 1\nI0203 22:40:05.275165    3662 log.go:172] (0xc000654790) (0xc0006e1c20) Stream removed, broadcasting: 3\nI0203 22:40:05.275221    3662 log.go:172] (0xc0008f6000) (1) Data frame handling\nI0203 22:40:05.275238    3662 log.go:172] (0xc0008f6000) (1) Data frame sent\nI0203 22:40:05.275249    3662 log.go:172] (0xc000654790) (0xc0008f6000) Stream removed, broadcasting: 1\nI0203 22:40:05.275621    3662 log.go:172] (0xc000654790) (0xc0008f60a0) Stream removed, broadcasting: 5\nI0203 22:40:05.275737    3662 log.go:172] (0xc000654790) Go away received\nI0203 22:40:05.275760    3662 log.go:172] (0xc000654790) (0xc0008f6000) Stream removed, broadcasting: 1\nI0203 22:40:05.275774    3662 log.go:172] (0xc000654790) (0xc0006e1c20) Stream removed, broadcasting: 3\nI0203 22:40:05.275783    3662 log.go:172] (0xc000654790) (0xc0008f60a0) Stream removed, broadcasting: 5\n"
Feb  3 22:40:05.287: INFO: stdout: ""
Feb  3 22:40:05.287: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:40:05.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6925" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.073 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":229,"skipped":3829,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:40:05.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-w2r77 in namespace proxy-4649
I0203 22:40:05.613386       8 runners.go:189] Created replication controller with name: proxy-service-w2r77, namespace: proxy-4649, replica count: 1
I0203 22:40:06.664376       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:07.664996       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:08.666332       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:09.667271       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:10.667863       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:11.668574       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:12.669171       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:13.669807       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0203 22:40:14.670687       8 runners.go:189] proxy-service-w2r77 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 22:40:14.892: INFO: setup took 9.381973738s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  3 22:40:15.034: INFO: (0) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 141.523904ms)
Feb  3 22:40:15.034: INFO: (0) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 141.981333ms)
Feb  3 22:40:15.034: INFO: (0) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 142.015835ms)
Feb  3 22:40:15.035: INFO: (0) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 142.640162ms)
Feb  3 22:40:15.035: INFO: (0) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 142.678122ms)
Feb  3 22:40:15.035: INFO: (0) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 142.193573ms)
Feb  3 22:40:15.035: INFO: (0) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 140.738643ms)
Feb  3 22:40:15.036: INFO: (0) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 143.300508ms)
Feb  3 22:40:15.036: INFO: (0) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 143.694133ms)
Feb  3 22:40:15.037: INFO: (0) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 144.512067ms)
Feb  3 22:40:15.038: INFO: (0) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 145.392537ms)
Feb  3 22:40:15.048: INFO: (0) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 155.944405ms)
Feb  3 22:40:15.048: INFO: (0) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 153.599004ms)
Feb  3 22:40:15.055: INFO: (0) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 22.0798ms)
Feb  3 22:40:15.080: INFO: (1) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 22.15091ms)
Feb  3 22:40:15.080: INFO: (1) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 22.724854ms)
Feb  3 22:40:15.080: INFO: (1) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 22.27864ms)
Feb  3 22:40:15.080: INFO: (1) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 22.720335ms)
Feb  3 22:40:15.081: INFO: (1) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 23.768845ms)
Feb  3 22:40:15.085: INFO: (1) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 26.945412ms)
Feb  3 22:40:15.085: INFO: (1) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 26.858432ms)
Feb  3 22:40:15.085: INFO: (1) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 27.028957ms)
Feb  3 22:40:15.085: INFO: (1) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 27.449136ms)
Feb  3 22:40:15.085: INFO: (1) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 28.186329ms)
Feb  3 22:40:15.092: INFO: (1) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 34.42902ms)
Feb  3 22:40:15.093: INFO: (1) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 35.989459ms)
Feb  3 22:40:15.093: INFO: (1) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 36.018545ms)
Feb  3 22:40:15.093: INFO: (1) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 36.129756ms)
Feb  3 22:40:15.093: INFO: (1) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: ... (200; 8.058389ms)
Feb  3 22:40:15.107: INFO: (2) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 13.327311ms)
Feb  3 22:40:15.109: INFO: (2) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 14.931971ms)
Feb  3 22:40:15.109: INFO: (2) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 14.357712ms)
Feb  3 22:40:15.109: INFO: (2) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 15.370511ms)
Feb  3 22:40:15.110: INFO: (2) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 15.627053ms)
Feb  3 22:40:15.110: INFO: (2) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 16.178208ms)
Feb  3 22:40:15.110: INFO: (2) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test<... (200; 7.014134ms)
Feb  3 22:40:15.133: INFO: (3) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 7.164253ms)
Feb  3 22:40:15.133: INFO: (3) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 7.275217ms)
Feb  3 22:40:15.133: INFO: (3) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 10.62797ms)
Feb  3 22:40:15.138: INFO: (3) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 12.013056ms)
Feb  3 22:40:15.144: INFO: (4) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 6.486939ms)
Feb  3 22:40:15.144: INFO: (4) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 6.765441ms)
Feb  3 22:40:15.149: INFO: (4) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 11.42321ms)
Feb  3 22:40:15.149: INFO: (4) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 11.621545ms)
Feb  3 22:40:15.153: INFO: (4) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 15.542893ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 16.77039ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 16.83653ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 16.900686ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 16.94991ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 17.47506ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 17.648146ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 17.568011ms)
Feb  3 22:40:15.155: INFO: (4) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: ... (200; 19.500229ms)
Feb  3 22:40:15.160: INFO: (4) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 21.888104ms)
Feb  3 22:40:15.173: INFO: (5) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 15.036221ms)
Feb  3 22:40:15.175: INFO: (5) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 15.087557ms)
Feb  3 22:40:15.175: INFO: (5) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 15.389075ms)
Feb  3 22:40:15.176: INFO: (5) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 16.755277ms)
Feb  3 22:40:15.176: INFO: (5) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 16.835747ms)
Feb  3 22:40:15.176: INFO: (5) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 16.675404ms)
Feb  3 22:40:15.176: INFO: (5) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 16.678915ms)
Feb  3 22:40:15.177: INFO: (5) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 17.370295ms)
Feb  3 22:40:15.177: INFO: (5) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 17.768332ms)
Feb  3 22:40:15.199: INFO: (6) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 21.298645ms)
Feb  3 22:40:15.199: INFO: (6) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 21.581468ms)
Feb  3 22:40:15.199: INFO: (6) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 21.467145ms)
Feb  3 22:40:15.200: INFO: (6) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 22.120001ms)
Feb  3 22:40:15.200: INFO: (6) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 22.521252ms)
Feb  3 22:40:15.201: INFO: (6) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 23.29033ms)
Feb  3 22:40:15.201: INFO: (6) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 23.52835ms)
Feb  3 22:40:15.201: INFO: (6) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 23.473382ms)
Feb  3 22:40:15.201: INFO: (6) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 23.686233ms)
Feb  3 22:40:15.201: INFO: (6) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 9.132128ms)
Feb  3 22:40:15.218: INFO: (7) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 10.361699ms)
Feb  3 22:40:15.218: INFO: (7) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 10.107702ms)
Feb  3 22:40:15.218: INFO: (7) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 10.363908ms)
Feb  3 22:40:15.218: INFO: (7) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 10.360287ms)
Feb  3 22:40:15.220: INFO: (7) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 12.552512ms)
Feb  3 22:40:15.220: INFO: (7) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 12.946267ms)
Feb  3 22:40:15.221: INFO: (7) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 13.075654ms)
Feb  3 22:40:15.221: INFO: (7) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 13.500451ms)
Feb  3 22:40:15.221: INFO: (7) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 13.288351ms)
Feb  3 22:40:15.223: INFO: (7) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 15.322127ms)
Feb  3 22:40:15.223: INFO: (7) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 15.287109ms)
Feb  3 22:40:15.223: INFO: (7) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test<... (200; 7.464464ms)
Feb  3 22:40:15.231: INFO: (8) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 7.854626ms)
Feb  3 22:40:15.233: INFO: (8) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 8.919463ms)
Feb  3 22:40:15.233: INFO: (8) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 9.238358ms)
Feb  3 22:40:15.233: INFO: (8) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 9.163915ms)
Feb  3 22:40:15.233: INFO: (8) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 9.766507ms)
Feb  3 22:40:15.233: INFO: (8) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 9.668368ms)
Feb  3 22:40:15.233: INFO: (8) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 9.854993ms)
Feb  3 22:40:15.234: INFO: (8) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 9.988076ms)
Feb  3 22:40:15.234: INFO: (8) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 10.088135ms)
Feb  3 22:40:15.234: INFO: (8) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 10.15344ms)
Feb  3 22:40:15.234: INFO: (8) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 10.41313ms)
Feb  3 22:40:15.235: INFO: (8) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 10.892122ms)
Feb  3 22:40:15.235: INFO: (8) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 11.373671ms)
Feb  3 22:40:15.240: INFO: (9) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 5.477942ms)
Feb  3 22:40:15.247: INFO: (9) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 11.13792ms)
Feb  3 22:40:15.248: INFO: (9) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 11.574855ms)
Feb  3 22:40:15.248: INFO: (9) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 11.183904ms)
Feb  3 22:40:15.248: INFO: (9) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 11.605572ms)
Feb  3 22:40:15.249: INFO: (9) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 12.94473ms)
Feb  3 22:40:15.250: INFO: (9) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 14.021782ms)
Feb  3 22:40:15.250: INFO: (9) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: ... (200; 14.85438ms)
Feb  3 22:40:15.251: INFO: (9) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 14.683261ms)
Feb  3 22:40:15.252: INFO: (9) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 16.288391ms)
Feb  3 22:40:15.252: INFO: (9) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 16.316269ms)
Feb  3 22:40:15.258: INFO: (10) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 5.438423ms)
Feb  3 22:40:15.260: INFO: (10) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 7.145226ms)
Feb  3 22:40:15.260: INFO: (10) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 7.118518ms)
Feb  3 22:40:15.261: INFO: (10) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 8.141022ms)
Feb  3 22:40:15.262: INFO: (10) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 10.095169ms)
Feb  3 22:40:15.262: INFO: (10) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 10.013255ms)
Feb  3 22:40:15.263: INFO: (10) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 10.223422ms)
Feb  3 22:40:15.263: INFO: (10) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 10.669357ms)
Feb  3 22:40:15.264: INFO: (10) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 11.379501ms)
Feb  3 22:40:15.264: INFO: (10) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 11.444507ms)
Feb  3 22:40:15.264: INFO: (10) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 11.775882ms)
Feb  3 22:40:15.264: INFO: (10) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 11.880145ms)
Feb  3 22:40:15.265: INFO: (10) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 12.131553ms)
Feb  3 22:40:15.265: INFO: (10) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 13.030999ms)
Feb  3 22:40:15.266: INFO: (10) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 13.660843ms)
Feb  3 22:40:15.309: INFO: (10) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 20.392042ms)
Feb  3 22:40:15.331: INFO: (11) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 20.709954ms)
Feb  3 22:40:15.331: INFO: (11) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 20.426434ms)
Feb  3 22:40:15.331: INFO: (11) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 20.922004ms)
Feb  3 22:40:15.331: INFO: (11) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 21.290797ms)
Feb  3 22:40:15.331: INFO: (11) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 22.186384ms)
Feb  3 22:40:15.332: INFO: (11) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 22.195401ms)
Feb  3 22:40:15.332: INFO: (11) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test<... (200; 22.431763ms)
Feb  3 22:40:15.332: INFO: (11) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 22.120473ms)
Feb  3 22:40:15.332: INFO: (11) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 23.000978ms)
Feb  3 22:40:15.332: INFO: (11) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 22.540813ms)
Feb  3 22:40:15.333: INFO: (11) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 22.635699ms)
Feb  3 22:40:15.333: INFO: (11) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 24.098309ms)
Feb  3 22:40:15.333: INFO: (11) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 23.331005ms)
Feb  3 22:40:15.340: INFO: (12) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 7.168523ms)
Feb  3 22:40:15.341: INFO: (12) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 7.901092ms)
Feb  3 22:40:15.341: INFO: (12) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 7.864337ms)
Feb  3 22:40:15.341: INFO: (12) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: ... (200; 13.494193ms)
Feb  3 22:40:15.347: INFO: (12) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 13.859752ms)
Feb  3 22:40:15.347: INFO: (12) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 13.764377ms)
Feb  3 22:40:15.347: INFO: (12) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 13.829513ms)
Feb  3 22:40:15.347: INFO: (12) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 14.065266ms)
Feb  3 22:40:15.350: INFO: (12) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 16.394946ms)
Feb  3 22:40:15.350: INFO: (12) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 16.642162ms)
Feb  3 22:40:15.350: INFO: (12) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 16.637781ms)
Feb  3 22:40:15.351: INFO: (12) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 17.124573ms)
Feb  3 22:40:15.353: INFO: (12) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 19.877815ms)
Feb  3 22:40:15.360: INFO: (13) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 6.748597ms)
Feb  3 22:40:15.361: INFO: (13) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 7.577459ms)
Feb  3 22:40:15.361: INFO: (13) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 7.766566ms)
Feb  3 22:40:15.362: INFO: (13) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test<... (200; 11.622922ms)
Feb  3 22:40:15.367: INFO: (13) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 13.471863ms)
Feb  3 22:40:15.367: INFO: (13) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 13.485635ms)
Feb  3 22:40:15.368: INFO: (13) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 13.947424ms)
Feb  3 22:40:15.368: INFO: (13) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 13.736695ms)
Feb  3 22:40:15.368: INFO: (13) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 13.782967ms)
Feb  3 22:40:15.368: INFO: (13) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 13.972399ms)
Feb  3 22:40:15.370: INFO: (13) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 16.101651ms)
Feb  3 22:40:15.370: INFO: (13) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 16.127277ms)
Feb  3 22:40:15.381: INFO: (14) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 10.980422ms)
Feb  3 22:40:15.383: INFO: (14) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 12.679273ms)
Feb  3 22:40:15.384: INFO: (14) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 14.205939ms)
Feb  3 22:40:15.385: INFO: (14) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 17.155719ms)
Feb  3 22:40:15.387: INFO: (14) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 17.206215ms)
Feb  3 22:40:15.400: INFO: (15) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: ... (200; 16.11868ms)
Feb  3 22:40:15.404: INFO: (15) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 16.339223ms)
Feb  3 22:40:15.404: INFO: (15) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 16.253254ms)
Feb  3 22:40:15.404: INFO: (15) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 16.982507ms)
Feb  3 22:40:15.404: INFO: (15) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 17.222034ms)
Feb  3 22:40:15.405: INFO: (15) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 17.813524ms)
Feb  3 22:40:15.412: INFO: (15) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 24.554294ms)
Feb  3 22:40:15.412: INFO: (15) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 24.700621ms)
Feb  3 22:40:15.412: INFO: (15) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 25.080859ms)
Feb  3 22:40:15.413: INFO: (15) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 25.161734ms)
Feb  3 22:40:15.413: INFO: (15) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 25.232258ms)
Feb  3 22:40:15.413: INFO: (15) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 25.453877ms)
Feb  3 22:40:15.418: INFO: (16) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 4.585621ms)
Feb  3 22:40:15.418: INFO: (16) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 4.575513ms)
Feb  3 22:40:15.431: INFO: (16) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 17.875143ms)
Feb  3 22:40:15.432: INFO: (16) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 19.186483ms)
Feb  3 22:40:15.440: INFO: (16) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 27.405927ms)
Feb  3 22:40:15.441: INFO: (16) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 27.842652ms)
Feb  3 22:40:15.442: INFO: (16) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 28.607681ms)
Feb  3 22:40:15.442: INFO: (16) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 28.619064ms)
Feb  3 22:40:15.442: INFO: (16) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 28.620879ms)
Feb  3 22:40:15.442: INFO: (16) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 28.607267ms)
Feb  3 22:40:15.443: INFO: (16) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 29.795377ms)
Feb  3 22:40:15.443: INFO: (16) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 29.848017ms)
Feb  3 22:40:15.443: INFO: (16) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 29.503514ms)
Feb  3 22:40:15.449: INFO: (16) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 35.753659ms)
Feb  3 22:40:15.449: INFO: (16) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 35.599487ms)
Feb  3 22:40:15.449: INFO: (16) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 20.258917ms)
Feb  3 22:40:15.470: INFO: (17) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 20.481117ms)
Feb  3 22:40:15.471: INFO: (17) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 21.6325ms)
Feb  3 22:40:15.472: INFO: (17) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 21.762915ms)
Feb  3 22:40:15.474: INFO: (17) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 24.226984ms)
Feb  3 22:40:15.474: INFO: (17) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 25.012811ms)
Feb  3 22:40:15.475: INFO: (17) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 25.597298ms)
Feb  3 22:40:15.477: INFO: (17) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 27.707904ms)
Feb  3 22:40:15.478: INFO: (17) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 28.139188ms)
Feb  3 22:40:15.478: INFO: (17) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: test (200; 18.563706ms)
Feb  3 22:40:15.500: INFO: (18) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:1080/proxy/: ... (200; 19.19525ms)
Feb  3 22:40:15.500: INFO: (18) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 18.790601ms)
Feb  3 22:40:15.500: INFO: (18) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 19.298827ms)
Feb  3 22:40:15.500: INFO: (18) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 19.444668ms)
Feb  3 22:40:15.502: INFO: (18) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 21.326713ms)
Feb  3 22:40:15.502: INFO: (18) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 21.260037ms)
Feb  3 22:40:15.502: INFO: (18) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:443/proxy/: ... (200; 29.111054ms)
Feb  3 22:40:15.537: INFO: (19) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:460/proxy/: tls baz (200; 29.267029ms)
Feb  3 22:40:15.537: INFO: (19) /api/v1/namespaces/proxy-4649/pods/https:proxy-service-w2r77-wsdjq:462/proxy/: tls qux (200; 29.584113ms)
Feb  3 22:40:15.537: INFO: (19) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq/proxy/: test (200; 29.341155ms)
Feb  3 22:40:15.538: INFO: (19) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 29.937277ms)
Feb  3 22:40:15.538: INFO: (19) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 30.954898ms)
Feb  3 22:40:15.538: INFO: (19) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:1080/proxy/: test<... (200; 31.329159ms)
Feb  3 22:40:15.539: INFO: (19) /api/v1/namespaces/proxy-4649/pods/proxy-service-w2r77-wsdjq:162/proxy/: bar (200; 31.093031ms)
Feb  3 22:40:15.539: INFO: (19) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname1/proxy/: tls baz (200; 31.75315ms)
Feb  3 22:40:15.539: INFO: (19) /api/v1/namespaces/proxy-4649/services/https:proxy-service-w2r77:tlsportname2/proxy/: tls qux (200; 31.010952ms)
Feb  3 22:40:15.539: INFO: (19) /api/v1/namespaces/proxy-4649/pods/http:proxy-service-w2r77-wsdjq:160/proxy/: foo (200; 31.13979ms)
Feb  3 22:40:15.539: INFO: (19) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname1/proxy/: foo (200; 31.083069ms)
Feb  3 22:40:15.539: INFO: (19) /api/v1/namespaces/proxy-4649/services/http:proxy-service-w2r77:portname2/proxy/: bar (200; 31.311957ms)
Feb  3 22:40:15.543: INFO: (19) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname1/proxy/: foo (200; 35.40694ms)
Feb  3 22:40:15.543: INFO: (19) /api/v1/namespaces/proxy-4649/services/proxy-service-w2r77:portname2/proxy/: bar (200; 35.018875ms)
STEP: deleting ReplicationController proxy-service-w2r77 in namespace proxy-4649, will wait for the garbage collector to delete the pods
Feb  3 22:40:15.606: INFO: Deleting ReplicationController proxy-service-w2r77 took: 6.283234ms
Feb  3 22:40:15.806: INFO: Terminating ReplicationController proxy-service-w2r77 pods took: 200.358076ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:40:32.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4649" for this suite.

• [SLOW TEST:27.049 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":230,"skipped":3858,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:40:32.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  3 22:40:32.483: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  3 22:40:32.547: INFO: Waiting for terminating namespaces to be deleted...
Feb  3 22:40:32.552: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  3 22:40:32.580: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.581: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 22:40:32.581: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  3 22:40:32.581: INFO: 	Container weave ready: true, restart count 1
Feb  3 22:40:32.581: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 22:40:32.581: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  3 22:40:32.619: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.619: INFO: 	Container coredns ready: true, restart count 0
Feb  3 22:40:32.619: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.619: INFO: 	Container coredns ready: true, restart count 0
Feb  3 22:40:32.619: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.619: INFO: 	Container kube-controller-manager ready: true, restart count 3
Feb  3 22:40:32.619: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.619: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 22:40:32.619: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  3 22:40:32.619: INFO: 	Container weave ready: true, restart count 0
Feb  3 22:40:32.619: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 22:40:32.619: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.619: INFO: 	Container kube-scheduler ready: true, restart count 4
Feb  3 22:40:32.619: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.619: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  3 22:40:32.619: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  3 22:40:32.619: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f0059b09b68565], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f0059b0c14f233], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:40:33.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-391" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":231,"skipped":3859,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:40:33.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4209
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-4209
STEP: creating replication controller externalsvc in namespace services-4209
I0203 22:40:34.098145       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4209, replica count: 2
I0203 22:40:37.149039       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:40.149627       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:43.150147       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:40:46.150658       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb  3 22:40:46.234: INFO: Creating new exec pod
Feb  3 22:40:54.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4209 execpodxvrv7 -- /bin/sh -x -c nslookup nodeport-service'
Feb  3 22:40:54.821: INFO: stderr: "I0203 22:40:54.541225    3684 log.go:172] (0xc000106370) (0xc000b1c1e0) Create stream\nI0203 22:40:54.541659    3684 log.go:172] (0xc000106370) (0xc000b1c1e0) Stream added, broadcasting: 1\nI0203 22:40:54.546771    3684 log.go:172] (0xc000106370) Reply frame received for 1\nI0203 22:40:54.546845    3684 log.go:172] (0xc000106370) (0xc000473400) Create stream\nI0203 22:40:54.546882    3684 log.go:172] (0xc000106370) (0xc000473400) Stream added, broadcasting: 3\nI0203 22:40:54.548125    3684 log.go:172] (0xc000106370) Reply frame received for 3\nI0203 22:40:54.548177    3684 log.go:172] (0xc000106370) (0xc0004734a0) Create stream\nI0203 22:40:54.548190    3684 log.go:172] (0xc000106370) (0xc0004734a0) Stream added, broadcasting: 5\nI0203 22:40:54.549377    3684 log.go:172] (0xc000106370) Reply frame received for 5\nI0203 22:40:54.662085    3684 log.go:172] (0xc000106370) Data frame received for 5\nI0203 22:40:54.662218    3684 log.go:172] (0xc0004734a0) (5) Data frame handling\nI0203 22:40:54.662271    3684 log.go:172] (0xc0004734a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0203 22:40:54.683429    3684 log.go:172] (0xc000106370) Data frame received for 3\nI0203 22:40:54.683607    3684 log.go:172] (0xc000473400) (3) Data frame handling\nI0203 22:40:54.683650    3684 log.go:172] (0xc000473400) (3) Data frame sent\nI0203 22:40:54.684400    3684 log.go:172] (0xc000106370) Data frame received for 3\nI0203 22:40:54.684415    3684 log.go:172] (0xc000473400) (3) Data frame handling\nI0203 22:40:54.684429    3684 log.go:172] (0xc000473400) (3) Data frame sent\nI0203 22:40:54.793829    3684 log.go:172] (0xc000106370) Data frame received for 1\nI0203 22:40:54.793940    3684 log.go:172] (0xc000106370) (0xc000473400) Stream removed, broadcasting: 3\nI0203 22:40:54.794037    3684 log.go:172] (0xc000b1c1e0) (1) Data frame handling\nI0203 22:40:54.794101    3684 log.go:172] (0xc000b1c1e0) (1) Data frame sent\nI0203 22:40:54.794155    3684 log.go:172] (0xc000106370) (0xc0004734a0) Stream removed, broadcasting: 5\nI0203 22:40:54.794224    3684 log.go:172] (0xc000106370) (0xc000b1c1e0) Stream removed, broadcasting: 1\nI0203 22:40:54.794244    3684 log.go:172] (0xc000106370) Go away received\nI0203 22:40:54.795520    3684 log.go:172] (0xc000106370) (0xc000b1c1e0) Stream removed, broadcasting: 1\nI0203 22:40:54.795549    3684 log.go:172] (0xc000106370) (0xc000473400) Stream removed, broadcasting: 3\nI0203 22:40:54.795559    3684 log.go:172] (0xc000106370) (0xc0004734a0) Stream removed, broadcasting: 5\n"
Feb  3 22:40:54.821: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4209.svc.cluster.local\tcanonical name = externalsvc.services-4209.svc.cluster.local.\nName:\texternalsvc.services-4209.svc.cluster.local\nAddress: 10.96.107.67\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4209, will wait for the garbage collector to delete the pods
Feb  3 22:40:54.888: INFO: Deleting ReplicationController externalsvc took: 9.397883ms
Feb  3 22:40:55.189: INFO: Terminating ReplicationController externalsvc pods took: 300.610754ms
Feb  3 22:41:13.254: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:41:13.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4209" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:39.715 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":232,"skipped":3871,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:41:13.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:41:14.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 22:41:16.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:41:18.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:41:20.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366474, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:41:23.205: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:41:23.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2831" for this suite.
STEP: Destroying namespace "webhook-2831-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.167 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":233,"skipped":3874,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:41:23.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:41:24.214: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 22:41:26.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:41:28.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:41:30.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:41:32.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716366484, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:41:35.285: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
Feb  3 22:41:36.968: INFO: Waiting for webhook configuration to be ready...
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:41:48.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4464" for this suite.
STEP: Destroying namespace "webhook-4464-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.800 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":234,"skipped":3881,"failed":0}
SSSS
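Editor's note: the timeout semantics this spec exercises are carried on the webhook configuration object itself. Below is a minimal sketch in Go of a registration whose 1s timeout is shorter than the 5s the test webhook sleeps, with failurePolicy Ignore so admission still succeeds; the webhook name is hypothetical, and this is not the suite's own helper. Omitting TimeoutSeconds entirely yields the v1 default of 10s, which is the "timeout is empty" case logged above.

    package sketch

    import (
        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // slowWebhookConfig sketches a registration like the "slow webhook" above:
    // a 1s timeout against a server that takes 5s, tolerated via Ignore.
    func slowWebhookConfig() *admissionv1.ValidatingWebhookConfiguration {
        timeout := int32(1)                            // shorter than the webhook's 5s latency
        ignore := admissionv1.Ignore                   // admission proceeds if the call times out
        sideEffects := admissionv1.SideEffectClassNone // required in v1
        return &admissionv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook.example.com"}, // hypothetical name
            Webhooks: []admissionv1.ValidatingWebhook{{
                Name:                    "slow-webhook.example.com",
                TimeoutSeconds:          &timeout,
                FailurePolicy:           &ignore,
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1"},
                ClientConfig: admissionv1.WebhookClientConfig{
                    // Service name and namespace taken from the log above.
                    Service: &admissionv1.ServiceReference{
                        Namespace: "webhook-4464",
                        Name:      "e2e-test-webhook",
                    },
                },
                // Rules omitted in this sketch; the real registration scopes
                // the webhook to the resources the test creates.
            }},
        }
    }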
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:41:48.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-894v
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 22:41:48.524: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-894v" in namespace "subpath-3542" to be "success or failure"
Feb  3 22:41:48.554: INFO: Pod "pod-subpath-test-projected-894v": Phase="Pending", Reason="", readiness=false. Elapsed: 30.14172ms
Feb  3 22:41:50.564: INFO: Pod "pod-subpath-test-projected-894v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039938963s
Feb  3 22:41:52.572: INFO: Pod "pod-subpath-test-projected-894v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048165505s
Feb  3 22:41:54.594: INFO: Pod "pod-subpath-test-projected-894v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070553907s
Feb  3 22:41:56.613: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 8.088835152s
Feb  3 22:41:58.628: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 10.104408145s
Feb  3 22:42:00.647: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 12.123173403s
Feb  3 22:42:02.654: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 14.130365745s
Feb  3 22:42:04.664: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 16.140378072s
Feb  3 22:42:06.674: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 18.150040168s
Feb  3 22:42:08.689: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 20.164901854s
Feb  3 22:42:10.696: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 22.171889901s
Feb  3 22:42:12.701: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 24.177362399s
Feb  3 22:42:14.726: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 26.202656889s
Feb  3 22:42:16.733: INFO: Pod "pod-subpath-test-projected-894v": Phase="Running", Reason="", readiness=true. Elapsed: 28.209318791s
Feb  3 22:42:18.759: INFO: Pod "pod-subpath-test-projected-894v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.235319439s
STEP: Saw pod success
Feb  3 22:42:18.759: INFO: Pod "pod-subpath-test-projected-894v" satisfied condition "success or failure"
Feb  3 22:42:18.811: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-894v container test-container-subpath-projected-894v: 
STEP: delete the pod
Feb  3 22:42:18.897: INFO: Waiting for pod pod-subpath-test-projected-894v to disappear
Feb  3 22:42:18.904: INFO: Pod pod-subpath-test-projected-894v no longer exists
STEP: Deleting pod pod-subpath-test-projected-894v
Feb  3 22:42:18.904: INFO: Deleting pod "pod-subpath-test-projected-894v" in namespace "subpath-3542"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:42:18.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3542" for this suite.

• [SLOW TEST:30.579 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":235,"skipped":3885,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
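Editor's note: the pod shape behind this spec is a projected volume mounted with subPath, so the container sees one entry of the volume rather than its root. A rough sketch under assumed names (the ConfigMap source, image, and paths are illustrative, not the suite's):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // subpathPod sketches a pod that mounts a single subPath of a projected
    // volume. Atomic-writer volumes (configMap, secret, downwardAPI, projected)
    // update via symlink swaps, which is what the test validates against.
    func subpathPod() *corev1.Pod {
        return &corev1.Pod{
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "projected-vol",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    // hypothetical ConfigMap name
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-vol",
                        MountPath: "/test-volume",
                        SubPath:   "subpath-file", // mount just this path, not the whole volume
                    }},
                }},
            },
        }
    }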
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:42:18.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:42:19.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8432
I0203 22:42:19.025251       8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8432, replica count: 1
I0203 22:42:20.075933       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:42:21.076407       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:42:22.076858       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:42:23.077294       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:42:24.077714       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:42:25.078336       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 22:42:25.311: INFO: Created: latency-svc-x8xj2
Feb  3 22:42:25.321: INFO: Got endpoints: latency-svc-x8xj2 [142.080318ms]
Feb  3 22:42:25.358: INFO: Created: latency-svc-s7js8
Feb  3 22:42:25.387: INFO: Got endpoints: latency-svc-s7js8 [66.312863ms]
Feb  3 22:42:25.391: INFO: Created: latency-svc-ck5t5
Feb  3 22:42:25.471: INFO: Got endpoints: latency-svc-ck5t5 [150.252321ms]
Feb  3 22:42:25.474: INFO: Created: latency-svc-5jjp4
Feb  3 22:42:25.536: INFO: Got endpoints: latency-svc-5jjp4 [212.223495ms]
Feb  3 22:42:25.542: INFO: Created: latency-svc-hjsqr
Feb  3 22:42:25.548: INFO: Got endpoints: latency-svc-hjsqr [226.10055ms]
Feb  3 22:42:25.623: INFO: Created: latency-svc-w5nnv
Feb  3 22:42:25.633: INFO: Got endpoints: latency-svc-w5nnv [311.118783ms]
Feb  3 22:42:25.665: INFO: Created: latency-svc-26snt
Feb  3 22:42:25.671: INFO: Got endpoints: latency-svc-26snt [349.527607ms]
Feb  3 22:42:25.774: INFO: Created: latency-svc-mw4mk
Feb  3 22:42:25.789: INFO: Got endpoints: latency-svc-mw4mk [466.765067ms]
Feb  3 22:42:25.811: INFO: Created: latency-svc-hvcxk
Feb  3 22:42:25.812: INFO: Got endpoints: latency-svc-hvcxk [489.712115ms]
Feb  3 22:42:25.843: INFO: Created: latency-svc-6xfjr
Feb  3 22:42:25.853: INFO: Got endpoints: latency-svc-6xfjr [530.964868ms]
Feb  3 22:42:25.873: INFO: Created: latency-svc-v9mjl
Feb  3 22:42:25.937: INFO: Got endpoints: latency-svc-v9mjl [614.028424ms]
Feb  3 22:42:25.961: INFO: Created: latency-svc-z65ss
Feb  3 22:42:25.966: INFO: Got endpoints: latency-svc-z65ss [642.795396ms]
Feb  3 22:42:25.991: INFO: Created: latency-svc-ks767
Feb  3 22:42:26.001: INFO: Got endpoints: latency-svc-ks767 [677.969437ms]
Feb  3 22:42:26.018: INFO: Created: latency-svc-fdjvl
Feb  3 22:42:26.018: INFO: Got endpoints: latency-svc-fdjvl [695.973692ms]
Feb  3 22:42:26.037: INFO: Created: latency-svc-w9j6v
Feb  3 22:42:26.161: INFO: Got endpoints: latency-svc-w9j6v [838.494771ms]
Feb  3 22:42:26.204: INFO: Created: latency-svc-w6fvn
Feb  3 22:42:26.231: INFO: Got endpoints: latency-svc-w6fvn [908.192997ms]
Feb  3 22:42:26.236: INFO: Created: latency-svc-xzbmt
Feb  3 22:42:26.745: INFO: Got endpoints: latency-svc-xzbmt [1.358094907s]
Feb  3 22:42:26.749: INFO: Created: latency-svc-qq848
Feb  3 22:42:26.825: INFO: Got endpoints: latency-svc-qq848 [1.353608766s]
Feb  3 22:42:26.832: INFO: Created: latency-svc-bdslr
Feb  3 22:42:26.835: INFO: Got endpoints: latency-svc-bdslr [1.299338869s]
Feb  3 22:42:26.929: INFO: Created: latency-svc-gwb4q
Feb  3 22:42:26.942: INFO: Got endpoints: latency-svc-gwb4q [1.394466466s]
Feb  3 22:42:26.977: INFO: Created: latency-svc-2zw45
Feb  3 22:42:26.986: INFO: Got endpoints: latency-svc-2zw45 [1.352652552s]
Feb  3 22:42:27.005: INFO: Created: latency-svc-5gpbj
Feb  3 22:42:27.012: INFO: Got endpoints: latency-svc-5gpbj [1.340776037s]
Feb  3 22:42:27.162: INFO: Created: latency-svc-twq8k
Feb  3 22:42:27.192: INFO: Got endpoints: latency-svc-twq8k [1.403049643s]
Feb  3 22:42:27.216: INFO: Created: latency-svc-zhm5x
Feb  3 22:42:27.240: INFO: Got endpoints: latency-svc-zhm5x [1.427968408s]
Feb  3 22:42:27.302: INFO: Created: latency-svc-xzrdf
Feb  3 22:42:27.307: INFO: Got endpoints: latency-svc-xzrdf [1.453046516s]
Feb  3 22:42:27.354: INFO: Created: latency-svc-qgj4s
Feb  3 22:42:27.376: INFO: Got endpoints: latency-svc-qgj4s [1.438519352s]
Feb  3 22:42:27.513: INFO: Created: latency-svc-h98z2
Feb  3 22:42:27.520: INFO: Got endpoints: latency-svc-h98z2 [1.554205824s]
Feb  3 22:42:27.552: INFO: Created: latency-svc-rnnls
Feb  3 22:42:27.559: INFO: Got endpoints: latency-svc-rnnls [1.557972329s]
Feb  3 22:42:27.578: INFO: Created: latency-svc-pwf9q
Feb  3 22:42:27.728: INFO: Got endpoints: latency-svc-pwf9q [1.709496689s]
Feb  3 22:42:27.743: INFO: Created: latency-svc-fh8mf
Feb  3 22:42:27.750: INFO: Got endpoints: latency-svc-fh8mf [1.58853851s]
Feb  3 22:42:27.781: INFO: Created: latency-svc-cvpmp
Feb  3 22:42:27.793: INFO: Got endpoints: latency-svc-cvpmp [1.561084353s]
Feb  3 22:42:27.826: INFO: Created: latency-svc-qt4xr
Feb  3 22:42:27.877: INFO: Got endpoints: latency-svc-qt4xr [1.130903261s]
Feb  3 22:42:28.194: INFO: Created: latency-svc-9w5zw
Feb  3 22:42:28.196: INFO: Got endpoints: latency-svc-9w5zw [1.370276487s]
Feb  3 22:42:28.629: INFO: Created: latency-svc-lddkh
Feb  3 22:42:28.671: INFO: Got endpoints: latency-svc-lddkh [1.835633508s]
Feb  3 22:42:28.673: INFO: Created: latency-svc-wp8fz
Feb  3 22:42:28.690: INFO: Got endpoints: latency-svc-wp8fz [1.747043309s]
Feb  3 22:42:28.835: INFO: Created: latency-svc-4mqfj
Feb  3 22:42:28.864: INFO: Got endpoints: latency-svc-4mqfj [1.877782044s]
Feb  3 22:42:28.869: INFO: Created: latency-svc-jm8xw
Feb  3 22:42:28.872: INFO: Got endpoints: latency-svc-jm8xw [1.85950656s]
Feb  3 22:42:28.901: INFO: Created: latency-svc-59vhm
Feb  3 22:42:28.923: INFO: Got endpoints: latency-svc-59vhm [1.730966042s]
Feb  3 22:42:29.011: INFO: Created: latency-svc-g5pvr
Feb  3 22:42:29.019: INFO: Got endpoints: latency-svc-g5pvr [1.778152909s]
Feb  3 22:42:29.021: INFO: Created: latency-svc-kpr4n
Feb  3 22:42:29.024: INFO: Got endpoints: latency-svc-kpr4n [1.71772941s]
Feb  3 22:42:29.064: INFO: Created: latency-svc-whl67
Feb  3 22:42:29.070: INFO: Got endpoints: latency-svc-whl67 [1.694255497s]
Feb  3 22:42:29.093: INFO: Created: latency-svc-69lzq
Feb  3 22:42:29.164: INFO: Got endpoints: latency-svc-69lzq [1.643060302s]
Feb  3 22:42:29.245: INFO: Created: latency-svc-fk5fx
Feb  3 22:42:29.248: INFO: Got endpoints: latency-svc-fk5fx [1.689125737s]
Feb  3 22:42:29.348: INFO: Created: latency-svc-dv4bh
Feb  3 22:42:29.372: INFO: Got endpoints: latency-svc-dv4bh [1.643686308s]
Feb  3 22:42:29.377: INFO: Created: latency-svc-vktks
Feb  3 22:42:29.388: INFO: Got endpoints: latency-svc-vktks [1.637912847s]
Feb  3 22:42:29.413: INFO: Created: latency-svc-lj2rm
Feb  3 22:42:29.424: INFO: Got endpoints: latency-svc-lj2rm [1.631421776s]
Feb  3 22:42:29.502: INFO: Created: latency-svc-r9k6l
Feb  3 22:42:29.522: INFO: Got endpoints: latency-svc-r9k6l [1.644729464s]
Feb  3 22:42:29.528: INFO: Created: latency-svc-wzwn4
Feb  3 22:42:29.532: INFO: Got endpoints: latency-svc-wzwn4 [1.33602117s]
Feb  3 22:42:29.585: INFO: Created: latency-svc-2c87m
Feb  3 22:42:29.594: INFO: Got endpoints: latency-svc-2c87m [923.36534ms]
Feb  3 22:42:29.716: INFO: Created: latency-svc-9sws8
Feb  3 22:42:29.721: INFO: Got endpoints: latency-svc-9sws8 [1.03147374s]
Feb  3 22:42:29.746: INFO: Created: latency-svc-npxm7
Feb  3 22:42:29.760: INFO: Got endpoints: latency-svc-npxm7 [895.710163ms]
Feb  3 22:42:29.908: INFO: Created: latency-svc-552js
Feb  3 22:42:29.908: INFO: Got endpoints: latency-svc-552js [1.036315226s]
Feb  3 22:42:29.956: INFO: Created: latency-svc-65t9p
Feb  3 22:42:30.018: INFO: Created: latency-svc-p5tnp
Feb  3 22:42:30.021: INFO: Got endpoints: latency-svc-65t9p [1.097564354s]
Feb  3 22:42:30.027: INFO: Got endpoints: latency-svc-p5tnp [1.007736399s]
Feb  3 22:42:30.046: INFO: Created: latency-svc-pmdxd
Feb  3 22:42:30.066: INFO: Got endpoints: latency-svc-pmdxd [1.041798498s]
Feb  3 22:42:30.105: INFO: Created: latency-svc-jzmxm
Feb  3 22:42:30.105: INFO: Got endpoints: latency-svc-jzmxm [1.034924939s]
Feb  3 22:42:30.226: INFO: Created: latency-svc-dfzwj
Feb  3 22:42:30.227: INFO: Got endpoints: latency-svc-dfzwj [1.062860756s]
Feb  3 22:42:30.257: INFO: Created: latency-svc-c4vv8
Feb  3 22:42:30.296: INFO: Got endpoints: latency-svc-c4vv8 [1.048348375s]
Feb  3 22:42:30.316: INFO: Created: latency-svc-xvm2d
Feb  3 22:42:30.319: INFO: Got endpoints: latency-svc-xvm2d [946.711764ms]
Feb  3 22:42:30.371: INFO: Created: latency-svc-lhpzt
Feb  3 22:42:30.379: INFO: Got endpoints: latency-svc-lhpzt [990.501322ms]
Feb  3 22:42:30.397: INFO: Created: latency-svc-k9zpg
Feb  3 22:42:30.421: INFO: Got endpoints: latency-svc-k9zpg [995.811221ms]
Feb  3 22:42:30.439: INFO: Created: latency-svc-k4cgz
Feb  3 22:42:30.445: INFO: Got endpoints: latency-svc-k4cgz [923.182363ms]
Feb  3 22:42:30.512: INFO: Created: latency-svc-6jkgv
Feb  3 22:42:30.519: INFO: Got endpoints: latency-svc-6jkgv [98.179411ms]
Feb  3 22:42:30.594: INFO: Created: latency-svc-qpngn
Feb  3 22:42:30.600: INFO: Got endpoints: latency-svc-qpngn [1.068640556s]
Feb  3 22:42:30.653: INFO: Created: latency-svc-nmmrz
Feb  3 22:42:30.663: INFO: Got endpoints: latency-svc-nmmrz [1.068538666s]
Feb  3 22:42:30.681: INFO: Created: latency-svc-bqw82
Feb  3 22:42:30.704: INFO: Got endpoints: latency-svc-bqw82 [983.085005ms]
Feb  3 22:42:30.737: INFO: Created: latency-svc-4w426
Feb  3 22:42:30.747: INFO: Got endpoints: latency-svc-4w426 [986.713922ms]
Feb  3 22:42:30.863: INFO: Created: latency-svc-259fw
Feb  3 22:42:30.871: INFO: Got endpoints: latency-svc-259fw [961.449604ms]
Feb  3 22:42:30.909: INFO: Created: latency-svc-s22n7
Feb  3 22:42:30.923: INFO: Got endpoints: latency-svc-s22n7 [901.389838ms]
Feb  3 22:42:30.957: INFO: Created: latency-svc-rb5tz
Feb  3 22:42:31.063: INFO: Got endpoints: latency-svc-rb5tz [1.036429062s]
Feb  3 22:42:31.075: INFO: Created: latency-svc-6csgl
Feb  3 22:42:31.089: INFO: Got endpoints: latency-svc-6csgl [1.022622609s]
Feb  3 22:42:31.122: INFO: Created: latency-svc-b8rpp
Feb  3 22:42:31.134: INFO: Got endpoints: latency-svc-b8rpp [1.028566159s]
Feb  3 22:42:31.229: INFO: Created: latency-svc-vckj7
Feb  3 22:42:31.229: INFO: Got endpoints: latency-svc-vckj7 [1.002810737s]
Feb  3 22:42:31.259: INFO: Created: latency-svc-72scw
Feb  3 22:42:31.273: INFO: Got endpoints: latency-svc-72scw [976.668786ms]
Feb  3 22:42:31.359: INFO: Created: latency-svc-hqfgd
Feb  3 22:42:31.393: INFO: Created: latency-svc-fjb6t
Feb  3 22:42:31.394: INFO: Got endpoints: latency-svc-hqfgd [1.074588154s]
Feb  3 22:42:31.422: INFO: Got endpoints: latency-svc-fjb6t [1.042996422s]
Feb  3 22:42:31.509: INFO: Created: latency-svc-4zmr2
Feb  3 22:42:31.520: INFO: Got endpoints: latency-svc-4zmr2 [1.075271363s]
Feb  3 22:42:31.548: INFO: Created: latency-svc-mmswm
Feb  3 22:42:31.555: INFO: Got endpoints: latency-svc-mmswm [1.035626686s]
Feb  3 22:42:31.580: INFO: Created: latency-svc-6lxfv
Feb  3 22:42:31.612: INFO: Got endpoints: latency-svc-6lxfv [1.011424453s]
Feb  3 22:42:31.686: INFO: Created: latency-svc-vvcvw
Feb  3 22:42:31.706: INFO: Got endpoints: latency-svc-vvcvw [1.042672158s]
Feb  3 22:42:31.738: INFO: Created: latency-svc-kgvrk
Feb  3 22:42:31.753: INFO: Got endpoints: latency-svc-kgvrk [1.048593799s]
Feb  3 22:42:31.780: INFO: Created: latency-svc-k8hnq
Feb  3 22:42:31.808: INFO: Got endpoints: latency-svc-k8hnq [1.061593799s]
Feb  3 22:42:31.831: INFO: Created: latency-svc-qkgtc
Feb  3 22:42:31.841: INFO: Got endpoints: latency-svc-qkgtc [970.459865ms]
Feb  3 22:42:31.966: INFO: Created: latency-svc-2pfct
Feb  3 22:42:31.971: INFO: Got endpoints: latency-svc-2pfct [1.048379618s]
Feb  3 22:42:32.017: INFO: Created: latency-svc-j7cj5
Feb  3 22:42:32.027: INFO: Got endpoints: latency-svc-j7cj5 [964.028802ms]
Feb  3 22:42:32.055: INFO: Created: latency-svc-wz56x
Feb  3 22:42:32.070: INFO: Got endpoints: latency-svc-wz56x [980.364848ms]
Feb  3 22:42:32.151: INFO: Created: latency-svc-g87nf
Feb  3 22:42:32.156: INFO: Got endpoints: latency-svc-g87nf [1.021763777s]
Feb  3 22:42:32.180: INFO: Created: latency-svc-cgpl9
Feb  3 22:42:32.191: INFO: Got endpoints: latency-svc-cgpl9 [961.521697ms]
Feb  3 22:42:32.220: INFO: Created: latency-svc-g75jv
Feb  3 22:42:32.414: INFO: Got endpoints: latency-svc-g75jv [1.14076095s]
Feb  3 22:42:32.419: INFO: Created: latency-svc-q2m8g
Feb  3 22:42:32.426: INFO: Got endpoints: latency-svc-q2m8g [1.032507213s]
Feb  3 22:42:32.601: INFO: Created: latency-svc-c65tz
Feb  3 22:42:32.624: INFO: Created: latency-svc-vkzn5
Feb  3 22:42:32.630: INFO: Got endpoints: latency-svc-c65tz [1.208440083s]
Feb  3 22:42:32.633: INFO: Got endpoints: latency-svc-vkzn5 [1.112925352s]
Feb  3 22:42:32.669: INFO: Created: latency-svc-vmbz7
Feb  3 22:42:32.672: INFO: Got endpoints: latency-svc-vmbz7 [1.117233319s]
Feb  3 22:42:32.735: INFO: Created: latency-svc-72g77
Feb  3 22:42:32.774: INFO: Got endpoints: latency-svc-72g77 [1.162260032s]
Feb  3 22:42:32.947: INFO: Created: latency-svc-8tp7q
Feb  3 22:42:32.947: INFO: Got endpoints: latency-svc-8tp7q [1.240914436s]
Feb  3 22:42:33.016: INFO: Created: latency-svc-k4ngd
Feb  3 22:42:33.019: INFO: Got endpoints: latency-svc-k4ngd [1.265733537s]
Feb  3 22:42:33.121: INFO: Created: latency-svc-sd7hd
Feb  3 22:42:33.121: INFO: Got endpoints: latency-svc-sd7hd [1.312319028s]
Feb  3 22:42:33.372: INFO: Created: latency-svc-jh2sw
Feb  3 22:42:33.386: INFO: Got endpoints: latency-svc-jh2sw [1.545106399s]
Feb  3 22:42:33.403: INFO: Created: latency-svc-mjwg9
Feb  3 22:42:33.418: INFO: Got endpoints: latency-svc-mjwg9 [1.446420951s]
Feb  3 22:42:33.524: INFO: Created: latency-svc-qqk5b
Feb  3 22:42:33.530: INFO: Got endpoints: latency-svc-qqk5b [1.502988645s]
Feb  3 22:42:33.558: INFO: Created: latency-svc-j96jg
Feb  3 22:42:33.568: INFO: Got endpoints: latency-svc-j96jg [1.497989989s]
Feb  3 22:42:33.591: INFO: Created: latency-svc-bjlm5
Feb  3 22:42:33.596: INFO: Got endpoints: latency-svc-bjlm5 [1.439296019s]
Feb  3 22:42:33.615: INFO: Created: latency-svc-vmbrw
Feb  3 22:42:33.715: INFO: Got endpoints: latency-svc-vmbrw [1.524338697s]
Feb  3 22:42:33.722: INFO: Created: latency-svc-k4h7c
Feb  3 22:42:33.729: INFO: Got endpoints: latency-svc-k4h7c [1.315108744s]
Feb  3 22:42:33.904: INFO: Created: latency-svc-7kjc5
Feb  3 22:42:33.911: INFO: Got endpoints: latency-svc-7kjc5 [1.484991836s]
Feb  3 22:42:33.951: INFO: Created: latency-svc-wgfr6
Feb  3 22:42:33.968: INFO: Got endpoints: latency-svc-wgfr6 [1.337510754s]
Feb  3 22:42:34.059: INFO: Created: latency-svc-5s7gb
Feb  3 22:42:34.079: INFO: Got endpoints: latency-svc-5s7gb [1.445315105s]
Feb  3 22:42:34.079: INFO: Created: latency-svc-m4557
Feb  3 22:42:34.112: INFO: Got endpoints: latency-svc-m4557 [1.439569059s]
Feb  3 22:42:34.139: INFO: Created: latency-svc-86l44
Feb  3 22:42:34.152: INFO: Got endpoints: latency-svc-86l44 [1.377177078s]
Feb  3 22:42:34.212: INFO: Created: latency-svc-nhb45
Feb  3 22:42:34.219: INFO: Got endpoints: latency-svc-nhb45 [1.271572181s]
Feb  3 22:42:34.364: INFO: Created: latency-svc-529mk
Feb  3 22:42:34.365: INFO: Got endpoints: latency-svc-529mk [1.345824594s]
Feb  3 22:42:34.397: INFO: Created: latency-svc-xs9qf
Feb  3 22:42:34.407: INFO: Got endpoints: latency-svc-xs9qf [1.28644053s]
Feb  3 22:42:34.426: INFO: Created: latency-svc-ktspv
Feb  3 22:42:34.435: INFO: Got endpoints: latency-svc-ktspv [1.048537355s]
Feb  3 22:42:34.462: INFO: Created: latency-svc-kbpn7
Feb  3 22:42:34.516: INFO: Got endpoints: latency-svc-kbpn7 [1.097613793s]
Feb  3 22:42:34.517: INFO: Created: latency-svc-rmppn
Feb  3 22:42:34.526: INFO: Got endpoints: latency-svc-rmppn [996.071764ms]
Feb  3 22:42:34.555: INFO: Created: latency-svc-h9mgp
Feb  3 22:42:34.574: INFO: Got endpoints: latency-svc-h9mgp [1.006473715s]
Feb  3 22:42:34.611: INFO: Created: latency-svc-6h6bm
Feb  3 22:42:34.659: INFO: Got endpoints: latency-svc-6h6bm [1.063455619s]
Feb  3 22:42:34.733: INFO: Created: latency-svc-jfmgm
Feb  3 22:42:34.741: INFO: Got endpoints: latency-svc-jfmgm [1.024485875s]
Feb  3 22:42:34.833: INFO: Created: latency-svc-kw7jw
Feb  3 22:42:34.869: INFO: Got endpoints: latency-svc-kw7jw [1.140213525s]
Feb  3 22:42:34.871: INFO: Created: latency-svc-ck9hk
Feb  3 22:42:34.892: INFO: Got endpoints: latency-svc-ck9hk [980.730125ms]
Feb  3 22:42:34.919: INFO: Created: latency-svc-mkg9q
Feb  3 22:42:34.990: INFO: Got endpoints: latency-svc-mkg9q [1.021551995s]
Feb  3 22:42:34.993: INFO: Created: latency-svc-2l499
Feb  3 22:42:35.012: INFO: Got endpoints: latency-svc-2l499 [932.476437ms]
Feb  3 22:42:35.055: INFO: Created: latency-svc-fzhkh
Feb  3 22:42:35.073: INFO: Got endpoints: latency-svc-fzhkh [960.667303ms]
Feb  3 22:42:35.120: INFO: Created: latency-svc-g76kh
Feb  3 22:42:35.131: INFO: Got endpoints: latency-svc-g76kh [979.08942ms]
Feb  3 22:42:35.202: INFO: Created: latency-svc-jklq4
Feb  3 22:42:35.217: INFO: Got endpoints: latency-svc-jklq4 [997.640578ms]
Feb  3 22:42:35.329: INFO: Created: latency-svc-xq6v4
Feb  3 22:42:35.335: INFO: Got endpoints: latency-svc-xq6v4 [969.891821ms]
Feb  3 22:42:35.390: INFO: Created: latency-svc-75bdq
Feb  3 22:42:35.404: INFO: Got endpoints: latency-svc-75bdq [996.748437ms]
Feb  3 22:42:35.426: INFO: Created: latency-svc-w99wl
Feb  3 22:42:35.426: INFO: Got endpoints: latency-svc-w99wl [991.088595ms]
Feb  3 22:42:36.503: INFO: Created: latency-svc-x7hbc
Feb  3 22:42:36.503: INFO: Got endpoints: latency-svc-x7hbc [1.987071649s]
Feb  3 22:42:37.115: INFO: Created: latency-svc-px8wz
Feb  3 22:42:37.132: INFO: Got endpoints: latency-svc-px8wz [2.604949799s]
Feb  3 22:42:37.268: INFO: Created: latency-svc-8cwgr
Feb  3 22:42:37.289: INFO: Got endpoints: latency-svc-8cwgr [2.71473041s]
Feb  3 22:42:37.397: INFO: Created: latency-svc-7fnkc
Feb  3 22:42:37.401: INFO: Got endpoints: latency-svc-7fnkc [2.741558794s]
Feb  3 22:42:37.447: INFO: Created: latency-svc-jl8gh
Feb  3 22:42:37.455: INFO: Got endpoints: latency-svc-jl8gh [2.71392273s]
Feb  3 22:42:37.473: INFO: Created: latency-svc-h8fk5
Feb  3 22:42:37.494: INFO: Got endpoints: latency-svc-h8fk5 [2.624998062s]
Feb  3 22:42:37.553: INFO: Created: latency-svc-nqdmm
Feb  3 22:42:37.562: INFO: Got endpoints: latency-svc-nqdmm [2.669788095s]
Feb  3 22:42:37.584: INFO: Created: latency-svc-hjg8h
Feb  3 22:42:37.594: INFO: Got endpoints: latency-svc-hjg8h [2.604293437s]
Feb  3 22:42:37.613: INFO: Created: latency-svc-lxvqd
Feb  3 22:42:37.639: INFO: Got endpoints: latency-svc-lxvqd [2.626365021s]
Feb  3 22:42:37.722: INFO: Created: latency-svc-tjf7d
Feb  3 22:42:37.722: INFO: Got endpoints: latency-svc-tjf7d [2.649512917s]
Feb  3 22:42:37.866: INFO: Created: latency-svc-m544c
Feb  3 22:42:37.872: INFO: Got endpoints: latency-svc-m544c [2.740325956s]
Feb  3 22:42:37.903: INFO: Created: latency-svc-q8b6q
Feb  3 22:42:37.919: INFO: Got endpoints: latency-svc-q8b6q [2.702791346s]
Feb  3 22:42:37.930: INFO: Created: latency-svc-mzqpt
Feb  3 22:42:37.938: INFO: Got endpoints: latency-svc-mzqpt [2.603210538s]
Feb  3 22:42:37.957: INFO: Created: latency-svc-nsr25
Feb  3 22:42:37.963: INFO: Got endpoints: latency-svc-nsr25 [2.558601038s]
Feb  3 22:42:38.019: INFO: Created: latency-svc-thwbl
Feb  3 22:42:38.024: INFO: Got endpoints: latency-svc-thwbl [2.597899415s]
Feb  3 22:42:38.055: INFO: Created: latency-svc-45f75
Feb  3 22:42:38.065: INFO: Got endpoints: latency-svc-45f75 [1.561663691s]
Feb  3 22:42:38.099: INFO: Created: latency-svc-tjzrm
Feb  3 22:42:38.117: INFO: Got endpoints: latency-svc-tjzrm [985.163106ms]
Feb  3 22:42:38.223: INFO: Created: latency-svc-sc7vg
Feb  3 22:42:38.666: INFO: Got endpoints: latency-svc-sc7vg [1.376023323s]
Feb  3 22:42:38.696: INFO: Created: latency-svc-dk6sk
Feb  3 22:42:38.700: INFO: Got endpoints: latency-svc-dk6sk [1.298779193s]
Feb  3 22:42:38.718: INFO: Created: latency-svc-2l6fk
Feb  3 22:42:38.725: INFO: Got endpoints: latency-svc-2l6fk [1.270264223s]
Feb  3 22:42:38.749: INFO: Created: latency-svc-m796l
Feb  3 22:42:38.814: INFO: Got endpoints: latency-svc-m796l [1.319495155s]
Feb  3 22:42:38.830: INFO: Created: latency-svc-hw6p9
Feb  3 22:42:38.837: INFO: Got endpoints: latency-svc-hw6p9 [1.2741691s]
Feb  3 22:42:38.857: INFO: Created: latency-svc-jljm7
Feb  3 22:42:38.897: INFO: Got endpoints: latency-svc-jljm7 [1.302651832s]
Feb  3 22:42:38.902: INFO: Created: latency-svc-g8lm4
Feb  3 22:42:39.068: INFO: Got endpoints: latency-svc-g8lm4 [1.429395908s]
Feb  3 22:42:39.073: INFO: Created: latency-svc-2mnxw
Feb  3 22:42:39.090: INFO: Got endpoints: latency-svc-2mnxw [1.367959188s]
Feb  3 22:42:39.140: INFO: Created: latency-svc-h8sqk
Feb  3 22:42:39.151: INFO: Got endpoints: latency-svc-h8sqk [1.278655226s]
Feb  3 22:42:39.269: INFO: Created: latency-svc-j66dd
Feb  3 22:42:39.298: INFO: Got endpoints: latency-svc-j66dd [1.37811936s]
Feb  3 22:42:39.325: INFO: Created: latency-svc-tdx25
Feb  3 22:42:39.329: INFO: Got endpoints: latency-svc-tdx25 [1.390375574s]
Feb  3 22:42:39.346: INFO: Created: latency-svc-xlq2p
Feb  3 22:42:39.423: INFO: Got endpoints: latency-svc-xlq2p [1.460505196s]
Feb  3 22:42:39.442: INFO: Created: latency-svc-j5szh
Feb  3 22:42:39.501: INFO: Got endpoints: latency-svc-j5szh [1.476781833s]
Feb  3 22:42:39.504: INFO: Created: latency-svc-9hgx6
Feb  3 22:42:39.513: INFO: Got endpoints: latency-svc-9hgx6 [1.447782855s]
Feb  3 22:42:39.858: INFO: Created: latency-svc-j6tzs
Feb  3 22:42:39.869: INFO: Got endpoints: latency-svc-j6tzs [1.751705024s]
Feb  3 22:42:39.916: INFO: Created: latency-svc-jlmgk
Feb  3 22:42:39.916: INFO: Got endpoints: latency-svc-jlmgk [1.250450562s]
Feb  3 22:42:39.947: INFO: Created: latency-svc-xc5jg
Feb  3 22:42:40.548: INFO: Created: latency-svc-cv79j
Feb  3 22:42:40.549: INFO: Got endpoints: latency-svc-xc5jg [1.849256561s]
Feb  3 22:42:40.600: INFO: Got endpoints: latency-svc-cv79j [1.875150212s]
Feb  3 22:42:40.627: INFO: Created: latency-svc-fdrp9
Feb  3 22:42:40.703: INFO: Got endpoints: latency-svc-fdrp9 [1.889292152s]
Feb  3 22:42:40.707: INFO: Created: latency-svc-8bfmp
Feb  3 22:42:40.718: INFO: Got endpoints: latency-svc-8bfmp [1.881111562s]
Feb  3 22:42:40.793: INFO: Created: latency-svc-r97bf
Feb  3 22:42:40.851: INFO: Got endpoints: latency-svc-r97bf [1.953898215s]
Feb  3 22:42:40.875: INFO: Created: latency-svc-8pq9t
Feb  3 22:42:40.890: INFO: Got endpoints: latency-svc-8pq9t [1.821603599s]
Feb  3 22:42:40.923: INFO: Created: latency-svc-p5m68
Feb  3 22:42:40.926: INFO: Got endpoints: latency-svc-p5m68 [1.835467378s]
Feb  3 22:42:40.950: INFO: Created: latency-svc-ddbwq
Feb  3 22:42:41.021: INFO: Got endpoints: latency-svc-ddbwq [1.87025816s]
Feb  3 22:42:41.036: INFO: Created: latency-svc-vn788
Feb  3 22:42:41.097: INFO: Got endpoints: latency-svc-vn788 [1.799454535s]
Feb  3 22:42:41.098: INFO: Created: latency-svc-mmhrk
Feb  3 22:42:41.104: INFO: Got endpoints: latency-svc-mmhrk [1.775651419s]
Feb  3 22:42:41.185: INFO: Created: latency-svc-2lz8k
Feb  3 22:42:41.251: INFO: Got endpoints: latency-svc-2lz8k [1.82715681s]
Feb  3 22:42:41.386: INFO: Created: latency-svc-hjc2q
Feb  3 22:42:41.403: INFO: Got endpoints: latency-svc-hjc2q [1.901769858s]
Feb  3 22:42:41.451: INFO: Created: latency-svc-dg7xr
Feb  3 22:42:41.464: INFO: Got endpoints: latency-svc-dg7xr [1.951666012s]
Feb  3 22:42:41.612: INFO: Created: latency-svc-4xbtd
Feb  3 22:42:41.635: INFO: Got endpoints: latency-svc-4xbtd [1.766158637s]
Feb  3 22:42:41.642: INFO: Created: latency-svc-bhlr7
Feb  3 22:42:41.647: INFO: Got endpoints: latency-svc-bhlr7 [1.730649319s]
Feb  3 22:42:41.684: INFO: Created: latency-svc-48qsv
Feb  3 22:42:41.747: INFO: Got endpoints: latency-svc-48qsv [1.197645609s]
Feb  3 22:42:41.770: INFO: Created: latency-svc-thf8c
Feb  3 22:42:41.780: INFO: Got endpoints: latency-svc-thf8c [1.17921521s]
Feb  3 22:42:41.816: INFO: Created: latency-svc-h77hc
Feb  3 22:42:41.831: INFO: Got endpoints: latency-svc-h77hc [1.127252727s]
Feb  3 22:42:41.954: INFO: Created: latency-svc-tp9gg
Feb  3 22:42:41.983: INFO: Got endpoints: latency-svc-tp9gg [1.265472188s]
Feb  3 22:42:41.985: INFO: Created: latency-svc-9mhwt
Feb  3 22:42:41.998: INFO: Got endpoints: latency-svc-9mhwt [1.146321368s]
Feb  3 22:42:42.022: INFO: Created: latency-svc-m2hzf
Feb  3 22:42:42.027: INFO: Got endpoints: latency-svc-m2hzf [1.136246958s]
Feb  3 22:42:42.044: INFO: Created: latency-svc-5cnh2
Feb  3 22:42:42.123: INFO: Got endpoints: latency-svc-5cnh2 [1.196756398s]
Feb  3 22:42:42.134: INFO: Created: latency-svc-ccpq8
Feb  3 22:42:42.143: INFO: Got endpoints: latency-svc-ccpq8 [1.121332407s]
Feb  3 22:42:42.175: INFO: Created: latency-svc-h4gj8
Feb  3 22:42:42.182: INFO: Got endpoints: latency-svc-h4gj8 [1.084013512s]
Feb  3 22:42:42.221: INFO: Created: latency-svc-ttgjf
Feb  3 22:42:42.346: INFO: Got endpoints: latency-svc-ttgjf [1.240902947s]
Feb  3 22:42:42.376: INFO: Created: latency-svc-64hs9
Feb  3 22:42:42.393: INFO: Got endpoints: latency-svc-64hs9 [1.142147689s]
Feb  3 22:42:42.443: INFO: Created: latency-svc-c5c7l
Feb  3 22:42:42.443: INFO: Got endpoints: latency-svc-c5c7l [1.039887291s]
Feb  3 22:42:42.532: INFO: Created: latency-svc-wz6x4
Feb  3 22:42:42.565: INFO: Got endpoints: latency-svc-wz6x4 [1.100610104s]
Feb  3 22:42:42.600: INFO: Created: latency-svc-z6r7z
Feb  3 22:42:42.670: INFO: Got endpoints: latency-svc-z6r7z [1.033836146s]
Feb  3 22:42:42.676: INFO: Created: latency-svc-d4tx8
Feb  3 22:42:42.682: INFO: Got endpoints: latency-svc-d4tx8 [1.035217716s]
Feb  3 22:42:42.719: INFO: Created: latency-svc-nc8pc
Feb  3 22:42:42.735: INFO: Got endpoints: latency-svc-nc8pc [987.789154ms]
Feb  3 22:42:42.902: INFO: Created: latency-svc-wgf9h
Feb  3 22:42:42.956: INFO: Got endpoints: latency-svc-wgf9h [1.175837199s]
Feb  3 22:42:42.957: INFO: Created: latency-svc-dknnq
Feb  3 22:42:42.966: INFO: Got endpoints: latency-svc-dknnq [1.134640571s]
Feb  3 22:42:43.101: INFO: Created: latency-svc-m9tgv
Feb  3 22:42:43.139: INFO: Got endpoints: latency-svc-m9tgv [1.155707363s]
Feb  3 22:42:43.185: INFO: Created: latency-svc-49vjz
Feb  3 22:42:43.195: INFO: Got endpoints: latency-svc-49vjz [1.19693971s]
Feb  3 22:42:43.319: INFO: Created: latency-svc-pgrkz
Feb  3 22:42:43.325: INFO: Got endpoints: latency-svc-pgrkz [1.297971766s]
Feb  3 22:42:43.369: INFO: Created: latency-svc-p5xqc
Feb  3 22:42:43.381: INFO: Got endpoints: latency-svc-p5xqc [1.257541899s]
Feb  3 22:42:43.465: INFO: Created: latency-svc-4792m
Feb  3 22:42:43.466: INFO: Got endpoints: latency-svc-4792m [1.323478817s]
Feb  3 22:42:43.675: INFO: Created: latency-svc-p8s6w
Feb  3 22:42:43.682: INFO: Got endpoints: latency-svc-p8s6w [1.499748945s]
Feb  3 22:42:43.705: INFO: Created: latency-svc-tz4nt
Feb  3 22:42:43.725: INFO: Got endpoints: latency-svc-tz4nt [1.378442313s]
Feb  3 22:42:43.725: INFO: Latencies: [66.312863ms 98.179411ms 150.252321ms 212.223495ms 226.10055ms 311.118783ms 349.527607ms 466.765067ms 489.712115ms 530.964868ms 614.028424ms 642.795396ms 677.969437ms 695.973692ms 838.494771ms 895.710163ms 901.389838ms 908.192997ms 923.182363ms 923.36534ms 932.476437ms 946.711764ms 960.667303ms 961.449604ms 961.521697ms 964.028802ms 969.891821ms 970.459865ms 976.668786ms 979.08942ms 980.364848ms 980.730125ms 983.085005ms 985.163106ms 986.713922ms 987.789154ms 990.501322ms 991.088595ms 995.811221ms 996.071764ms 996.748437ms 997.640578ms 1.002810737s 1.006473715s 1.007736399s 1.011424453s 1.021551995s 1.021763777s 1.022622609s 1.024485875s 1.028566159s 1.03147374s 1.032507213s 1.033836146s 1.034924939s 1.035217716s 1.035626686s 1.036315226s 1.036429062s 1.039887291s 1.041798498s 1.042672158s 1.042996422s 1.048348375s 1.048379618s 1.048537355s 1.048593799s 1.061593799s 1.062860756s 1.063455619s 1.068538666s 1.068640556s 1.074588154s 1.075271363s 1.084013512s 1.097564354s 1.097613793s 1.100610104s 1.112925352s 1.117233319s 1.121332407s 1.127252727s 1.130903261s 1.134640571s 1.136246958s 1.140213525s 1.14076095s 1.142147689s 1.146321368s 1.155707363s 1.162260032s 1.175837199s 1.17921521s 1.196756398s 1.19693971s 1.197645609s 1.208440083s 1.240902947s 1.240914436s 1.250450562s 1.257541899s 1.265472188s 1.265733537s 1.270264223s 1.271572181s 1.2741691s 1.278655226s 1.28644053s 1.297971766s 1.298779193s 1.299338869s 1.302651832s 1.312319028s 1.315108744s 1.319495155s 1.323478817s 1.33602117s 1.337510754s 1.340776037s 1.345824594s 1.352652552s 1.353608766s 1.358094907s 1.367959188s 1.370276487s 1.376023323s 1.377177078s 1.37811936s 1.378442313s 1.390375574s 1.394466466s 1.403049643s 1.427968408s 1.429395908s 1.438519352s 1.439296019s 1.439569059s 1.445315105s 1.446420951s 1.447782855s 1.453046516s 1.460505196s 1.476781833s 1.484991836s 1.497989989s 1.499748945s 1.502988645s 1.524338697s 1.545106399s 1.554205824s 1.557972329s 1.561084353s 1.561663691s 1.58853851s 1.631421776s 1.637912847s 1.643060302s 1.643686308s 1.644729464s 1.689125737s 1.694255497s 1.709496689s 1.71772941s 1.730649319s 1.730966042s 1.747043309s 1.751705024s 1.766158637s 1.775651419s 1.778152909s 1.799454535s 1.821603599s 1.82715681s 1.835467378s 1.835633508s 1.849256561s 1.85950656s 1.87025816s 1.875150212s 1.877782044s 1.881111562s 1.889292152s 1.901769858s 1.951666012s 1.953898215s 1.987071649s 2.558601038s 2.597899415s 2.603210538s 2.604293437s 2.604949799s 2.624998062s 2.626365021s 2.649512917s 2.669788095s 2.702791346s 2.71392273s 2.71473041s 2.740325956s 2.741558794s]
Feb  3 22:42:43.725: INFO: 50 %ile: 1.257541899s
Feb  3 22:42:43.725: INFO: 90 %ile: 1.881111562s
Feb  3 22:42:43.725: INFO: 99 %ile: 2.740325956s
Feb  3 22:42:43.725: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:42:43.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8432" for this suite.

• [SLOW TEST:24.927 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":236,"skipped":3942,"failed":0}
SS
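Editor's note: the 50/90/99 %ile figures above are plain order statistics over the 200 samples in the Latencies list. The suite has its own helper; the arithmetic is roughly this sketch (for the 200 samples logged, the 99 %ile lands on the 198th sorted value, matching 2.740325956s above):

    package sketch

    import (
        "sort"
        "time"
    )

    // percentile returns the p-th percentile of the observed latencies:
    // sort ascending, then take the ceil(n*p/100)-th smallest sample.
    func percentile(latencies []time.Duration, p int) time.Duration {
        sorted := append([]time.Duration(nil), latencies...) // don't mutate the caller's slice
        sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
        idx := (len(sorted)*p + 99) / 100 // ceil(n*p/100)
        if idx > 0 {
            idx-- // convert 1-based rank to slice index
        }
        return sorted[idx]
    }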
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:42:43.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2115
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2115
STEP: Creating statefulset with conflicting port in namespace statefulset-2115
STEP: Waiting until pod test-pod starts running in namespace statefulset-2115
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-2115
Feb  3 22:42:58.310: INFO: Observed stateful pod in namespace: statefulset-2115, name: ss-0, uid: 806482b7-7a38-4715-82c9-6e6d9d85f6a3, status phase: Pending. Waiting for statefulset controller to delete.
Feb  3 22:43:03.107: INFO: Observed stateful pod in namespace: statefulset-2115, name: ss-0, uid: 806482b7-7a38-4715-82c9-6e6d9d85f6a3, status phase: Failed. Waiting for statefulset controller to delete.
Feb  3 22:43:03.163: INFO: Observed stateful pod in namespace: statefulset-2115, name: ss-0, uid: 806482b7-7a38-4715-82c9-6e6d9d85f6a3, status phase: Failed. Waiting for statefulset controller to delete.
Feb  3 22:43:03.187: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2115
STEP: Removing pod with conflicting port in namespace statefulset-2115
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2115 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 22:43:14.602: INFO: Deleting all statefulset in ns statefulset-2115
Feb  3 22:43:14.631: INFO: Scaling statefulset ss to 0
Feb  3 22:43:34.714: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:43:34.719: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:43:34.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2115" for this suite.

• [SLOW TEST:50.907 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":237,"skipped":3944,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:43:34.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:43:34.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80" in namespace "projected-1661" to be "success or failure"
Feb  3 22:43:34.954: INFO: Pod "downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80": Phase="Pending", Reason="", readiness=false. Elapsed: 27.900406ms
Feb  3 22:43:36.961: INFO: Pod "downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034698776s
Feb  3 22:43:38.969: INFO: Pod "downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042715889s
Feb  3 22:43:40.991: INFO: Pod "downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064497538s
Feb  3 22:43:42.996: INFO: Pod "downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06993405s
STEP: Saw pod success
Feb  3 22:43:42.996: INFO: Pod "downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80" satisfied condition "success or failure"
Feb  3 22:43:42.999: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80 container client-container: 
STEP: delete the pod
Feb  3 22:43:43.034: INFO: Waiting for pod downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80 to disappear
Feb  3 22:43:43.092: INFO: Pod downwardapi-volume-5953b8c5-2e35-4e7b-ba94-65f1cdf82f80 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:43:43.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1661" for this suite.

• [SLOW TEST:8.413 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3944,"failed":0}
SS
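Editor's note: what this spec projects is the container's cpu limit, exposed as a file through a downward API source inside a projected volume. A sketch with illustrative names (the container name matches the "client-container" logged above):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // cpuLimitVolume sketches a projected volume whose single file carries
    // the named container's limits.cpu, resolved by the kubelet at mount time.
    func cpuLimitVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    }},
                },
            },
        }
    }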
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:43:43.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  3 22:43:43.387: INFO: Waiting up to 5m0s for pod "pod-d5c8c28e-4def-44f9-8e66-bbe046631647" in namespace "emptydir-489" to be "success or failure"
Feb  3 22:43:43.394: INFO: Pod "pod-d5c8c28e-4def-44f9-8e66-bbe046631647": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588668ms
Feb  3 22:43:45.403: INFO: Pod "pod-d5c8c28e-4def-44f9-8e66-bbe046631647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01591459s
Feb  3 22:43:47.410: INFO: Pod "pod-d5c8c28e-4def-44f9-8e66-bbe046631647": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022819537s
Feb  3 22:43:49.420: INFO: Pod "pod-d5c8c28e-4def-44f9-8e66-bbe046631647": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032812357s
Feb  3 22:43:51.431: INFO: Pod "pod-d5c8c28e-4def-44f9-8e66-bbe046631647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044120779s
STEP: Saw pod success
Feb  3 22:43:51.431: INFO: Pod "pod-d5c8c28e-4def-44f9-8e66-bbe046631647" satisfied condition "success or failure"
Feb  3 22:43:51.435: INFO: Trying to get logs from node jerma-node pod pod-d5c8c28e-4def-44f9-8e66-bbe046631647 container test-container: 
STEP: delete the pod
Feb  3 22:43:51.477: INFO: Waiting for pod pod-d5c8c28e-4def-44f9-8e66-bbe046631647 to disappear
Feb  3 22:43:51.559: INFO: Pod pod-d5c8c28e-4def-44f9-8e66-bbe046631647 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:43:51.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-489" for this suite.

• [SLOW TEST:8.374 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3946,"failed":0}
SSS
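Editor's note: the test name encodes the setup — a non-root user, a file created with mode 0644, and an emptyDir backed by tmpfs. The volume and security-context half of that looks roughly like the sketch below (UID and image are illustrative; the 0644 is the mode the test container gives the file it writes into the volume):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // tmpfsEmptyDirPodSpec sketches a memory-backed emptyDir mounted by a
    // pod that runs as a non-root user.
    func tmpfsEmptyDirPodSpec() corev1.PodSpec {
        nonRoot := int64(1001) // any non-zero UID works for "non-root"
        return corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium: Memory is what makes this emptyDir a tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        }
    }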
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:43:51.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:44:04.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8954" for this suite.

• [SLOW TEST:13.396 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":240,"skipped":3949,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
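Editor's note: a quota of the kind whose lifecycle is traced above caps pod count and aggregate requests; usage rises when a pod is admitted and is released when it is deleted, which is exactly the final two STEPs. A sketch with illustrative limits (the quota name and values are not the suite's):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podQuota sketches a ResourceQuota with hard caps; a second pod, or one
    // requesting more than the remainder, is rejected at admission time.
    func podQuota() *corev1.ResourceQuota {
        return &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
            Spec: corev1.ResourceQuotaSpec{
                Hard: corev1.ResourceList{
                    corev1.ResourcePods:           resource.MustParse("2"),
                    corev1.ResourceRequestsCPU:    resource.MustParse("1"),
                    corev1.ResourceRequestsMemory: resource.MustParse("500Mi"),
                },
            },
        }
    }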
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:44:04.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:44:05.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b" in namespace "downward-api-2463" to be "success or failure"
Feb  3 22:44:05.086: INFO: Pod "downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.827556ms
Feb  3 22:44:07.098: INFO: Pod "downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01788499s
Feb  3 22:44:09.110: INFO: Pod "downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029173357s
Feb  3 22:44:11.117: INFO: Pod "downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036275941s
Feb  3 22:44:13.124: INFO: Pod "downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04350656s
STEP: Saw pod success
Feb  3 22:44:13.124: INFO: Pod "downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b" satisfied condition "success or failure"
Feb  3 22:44:13.127: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b container client-container: 
STEP: delete the pod
Feb  3 22:44:13.256: INFO: Waiting for pod downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b to disappear
Feb  3 22:44:13.267: INFO: Pod downwardapi-volume-6c4f2cc9-fa7a-4f2f-a83d-601de1dd275b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:44:13.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2463" for this suite.

• [SLOW TEST:8.308 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3972,"failed":0}
SSSSSSS
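Editor's note: DefaultMode is a single octal mode applied to every file the downward API volume projects, unless an individual item overrides it; that is the knob this spec checks. A sketch with an illustrative item:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // defaultModeVolume sketches a downward API volume whose files all get
    // mode 0400 via DefaultMode.
    func defaultModeVolume() corev1.Volume {
        mode := int32(0400) // octal mode inherited by every projected file
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    DefaultMode: &mode,
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
    }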
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:44:13.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-a01c706e-501e-4d40-b1ff-de5b8c93e397
STEP: Creating a pod to test consume configMaps
Feb  3 22:44:13.413: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742" in namespace "projected-9458" to be "success or failure"
Feb  3 22:44:13.422: INFO: Pod "pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742": Phase="Pending", Reason="", readiness=false. Elapsed: 8.794508ms
Feb  3 22:44:15.432: INFO: Pod "pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019030145s
Feb  3 22:44:17.457: INFO: Pod "pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044105692s
Feb  3 22:44:19.463: INFO: Pod "pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0504824s
Feb  3 22:44:21.471: INFO: Pod "pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057915161s
STEP: Saw pod success
Feb  3 22:44:21.471: INFO: Pod "pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742" satisfied condition "success or failure"
Feb  3 22:44:21.475: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 22:44:21.517: INFO: Waiting for pod pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742 to disappear
Feb  3 22:44:22.266: INFO: Pod pod-projected-configmaps-9840e0d5-30d1-418e-aeaa-fee0fccc8742 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:44:22.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9458" for this suite.

• [SLOW TEST:9.041 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3979,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:44:22.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7903 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7903
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7903 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7903
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7903.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7903.svc
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7903.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7903.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7903.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7903.svc
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7903.pod.cluster.local"}')
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord
  check="$$(dig +notcp +noall +answer +search 102.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.102_udp@PTR
  check="$$(dig +tcp +noall +answer +search 102.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.102_tcp@PTR
  sleep 1
done

STEP: Running these commands on jessie:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7903 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7903
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7903 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7903
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7903.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7903.svc
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7903.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7903.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7903.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7903.svc
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7903.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7903.svc
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7903.pod.cluster.local"}')
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord
  check="$$(dig +notcp +noall +answer +search 102.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.102_udp@PTR
  check="$$(dig +tcp +noall +answer +search 102.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.102_tcp@PTR
  sleep 1
done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 22:44:32.662: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.670: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.742: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.747: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.752: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.759: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.768: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.802: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.808: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.826: INFO: Unable to read jessie_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.833: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.838: INFO: Unable to read jessie_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.844: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.856: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.868: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:32.904: INFO: Lookups using dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7903 wheezy_tcp@dns-test-service.dns-7903 wheezy_udp@dns-test-service.dns-7903.svc wheezy_tcp@dns-test-service.dns-7903.svc wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7903 jessie_tcp@dns-test-service.dns-7903 jessie_udp@dns-test-service.dns-7903.svc jessie_tcp@dns-test-service.dns-7903.svc jessie_udp@_http._tcp.dns-test-service.dns-7903.svc jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc]

Feb  3 22:44:37.914: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:37.922: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:37.931: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:37.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:37.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:37.950: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:37.956: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:37.969: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.006: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.010: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.016: INFO: Unable to read jessie_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.024: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.028: INFO: Unable to read jessie_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.035: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.038: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:38.069: INFO: Lookups using dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7903 wheezy_tcp@dns-test-service.dns-7903 wheezy_udp@dns-test-service.dns-7903.svc wheezy_tcp@dns-test-service.dns-7903.svc wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7903 jessie_tcp@dns-test-service.dns-7903 jessie_udp@dns-test-service.dns-7903.svc jessie_tcp@dns-test-service.dns-7903.svc jessie_udp@_http._tcp.dns-test-service.dns-7903.svc jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc]

Feb  3 22:44:42.927: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.931: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.935: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.941: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.948: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.953: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.955: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.986: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.993: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.996: INFO: Unable to read jessie_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:42.998: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:43.001: INFO: Unable to read jessie_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:43.006: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:43.013: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:43.023: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:43.076: INFO: Lookups using dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7903 wheezy_tcp@dns-test-service.dns-7903 wheezy_udp@dns-test-service.dns-7903.svc wheezy_tcp@dns-test-service.dns-7903.svc wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7903 jessie_tcp@dns-test-service.dns-7903 jessie_udp@dns-test-service.dns-7903.svc jessie_tcp@dns-test-service.dns-7903.svc jessie_udp@_http._tcp.dns-test-service.dns-7903.svc jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc]

Feb  3 22:44:47.916: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.923: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.927: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.931: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.935: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.943: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.949: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.980: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.983: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.987: INFO: Unable to read jessie_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.990: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.996: INFO: Unable to read jessie_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:47.999: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:48.003: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:48.006: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:48.021: INFO: Lookups using dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7903 wheezy_tcp@dns-test-service.dns-7903 wheezy_udp@dns-test-service.dns-7903.svc wheezy_tcp@dns-test-service.dns-7903.svc wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7903 jessie_tcp@dns-test-service.dns-7903 jessie_udp@dns-test-service.dns-7903.svc jessie_tcp@dns-test-service.dns-7903.svc jessie_udp@_http._tcp.dns-test-service.dns-7903.svc jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc]

Feb  3 22:44:52.919: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:52.988: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:52.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.003: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.009: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.014: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.026: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.055: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.062: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.066: INFO: Unable to read jessie_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.070: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.101: INFO: Unable to read jessie_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.109: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.113: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.118: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:53.136: INFO: Lookups using dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7903 wheezy_tcp@dns-test-service.dns-7903 wheezy_udp@dns-test-service.dns-7903.svc wheezy_tcp@dns-test-service.dns-7903.svc wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7903 jessie_tcp@dns-test-service.dns-7903 jessie_udp@dns-test-service.dns-7903.svc jessie_tcp@dns-test-service.dns-7903.svc jessie_udp@_http._tcp.dns-test-service.dns-7903.svc jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc]

Feb  3 22:44:57.927: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:57.949: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:57.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:57.967: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:57.975: INFO: Unable to read wheezy_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:57.979: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:57.982: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:57.986: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.024: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.027: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.031: INFO: Unable to read jessie_udp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.038: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903 from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.046: INFO: Unable to read jessie_udp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.052: INFO: Unable to read jessie_tcp@dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.055: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.062: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc from pod dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f: the server could not find the requested resource (get pods dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f)
Feb  3 22:44:58.087: INFO: Lookups using dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7903 wheezy_tcp@dns-test-service.dns-7903 wheezy_udp@dns-test-service.dns-7903.svc wheezy_tcp@dns-test-service.dns-7903.svc wheezy_udp@_http._tcp.dns-test-service.dns-7903.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7903.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7903 jessie_tcp@dns-test-service.dns-7903 jessie_udp@dns-test-service.dns-7903.svc jessie_tcp@dns-test-service.dns-7903.svc jessie_udp@_http._tcp.dns-test-service.dns-7903.svc jessie_tcp@_http._tcp.dns-test-service.dns-7903.svc]

Feb  3 22:45:03.079: INFO: DNS probes using dns-7903/dns-test-29a7609c-9dc2-4a99-a42f-d68a427f7e1f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:45:03.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7903" for this suite.

• [SLOW TEST:41.049 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":243,"skipped":3998,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:45:03.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:45:14.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7509" for this suite.

• [SLOW TEST:11.209 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":244,"skipped":4001,"failed":0}
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:45:14.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-be2af225-4b0d-43c4-b055-907832d555c5
STEP: Creating a pod to test consume secrets
Feb  3 22:45:14.736: INFO: Waiting up to 5m0s for pod "pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c" in namespace "secrets-1753" to be "success or failure"
Feb  3 22:45:14.761: INFO: Pod "pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.875941ms
Feb  3 22:45:16.769: INFO: Pod "pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033237264s
Feb  3 22:45:18.777: INFO: Pod "pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041273409s
Feb  3 22:45:20.782: INFO: Pod "pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046331423s
Feb  3 22:45:22.790: INFO: Pod "pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054378008s
STEP: Saw pod success
Feb  3 22:45:22.790: INFO: Pod "pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c" satisfied condition "success or failure"
Feb  3 22:45:22.794: INFO: Trying to get logs from node jerma-node pod pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c container secret-volume-test: 
STEP: delete the pod
Feb  3 22:45:22.823: INFO: Waiting for pod pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c to disappear
Feb  3 22:45:22.831: INFO: Pod pod-secrets-12b1aaf7-1e4e-420e-b7f9-ff66a8e13a6c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:45:22.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1753" for this suite.

• [SLOW TEST:8.260 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4001,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:45:22.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0203 22:45:35.008992       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 22:45:35.009: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:45:35.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8646" for this suite.

• [SLOW TEST:12.173 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":246,"skipped":4032,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:45:35.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-9b3f60fe-34b4-4585-b7b4-0d6495a5ad02
STEP: Creating a pod to test consume configMaps
Feb  3 22:45:35.130: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9" in namespace "projected-6767" to be "success or failure"
Feb  3 22:45:35.143: INFO: Pod "pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.417438ms
Feb  3 22:45:37.161: INFO: Pod "pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030813065s
Feb  3 22:45:39.169: INFO: Pod "pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038690748s
Feb  3 22:45:41.175: INFO: Pod "pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044665833s
Feb  3 22:45:43.181: INFO: Pod "pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050295275s
STEP: Saw pod success
Feb  3 22:45:43.181: INFO: Pod "pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9" satisfied condition "success or failure"
Feb  3 22:45:43.183: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 22:45:43.477: INFO: Waiting for pod pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9 to disappear
Feb  3 22:45:43.531: INFO: Pod pod-projected-configmaps-2b51bef3-cca4-4667-be8a-e6dae2141ee9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:45:43.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6767" for this suite.

• [SLOW TEST:8.525 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4046,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:45:43.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0203 22:45:56.518587       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 22:45:56.519: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:45:56.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5252" for this suite.

• [SLOW TEST:12.986 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":248,"skipped":4063,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:45:56.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 22:46:00.375: INFO: Waiting up to 5m0s for pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944" in namespace "emptydir-9895" to be "success or failure"
Feb  3 22:46:00.448: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944": Phase="Pending", Reason="", readiness=false. Elapsed: 72.154428ms
Feb  3 22:46:02.899: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523980743s
Feb  3 22:46:04.948: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572620784s
Feb  3 22:46:06.989: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613857947s
Feb  3 22:46:10.253: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944": Phase="Pending", Reason="", readiness=false. Elapsed: 9.877456655s
Feb  3 22:46:12.258: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944": Phase="Pending", Reason="", readiness=false. Elapsed: 11.8824807s
Feb  3 22:46:14.265: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.889954373s
STEP: Saw pod success
Feb  3 22:46:14.266: INFO: Pod "pod-38381fb6-14e9-4f50-a357-822ae44a7944" satisfied condition "success or failure"
Feb  3 22:46:14.270: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-38381fb6-14e9-4f50-a357-822ae44a7944 container test-container: 
STEP: delete the pod
Feb  3 22:46:14.486: INFO: Waiting for pod pod-38381fb6-14e9-4f50-a357-822ae44a7944 to disappear
Feb  3 22:46:14.503: INFO: Pod pod-38381fb6-14e9-4f50-a357-822ae44a7944 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:46:14.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9895" for this suite.

• [SLOW TEST:18.064 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4063,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:46:14.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:46:16.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6" in namespace "downward-api-5062" to be "success or failure"
Feb  3 22:46:16.719: INFO: Pod "downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6": Phase="Pending", Reason="", readiness=false. Elapsed: 127.033181ms
Feb  3 22:46:18.727: INFO: Pod "downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134573742s
Feb  3 22:46:21.883: INFO: Pod "downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.290424602s
Feb  3 22:46:23.894: INFO: Pod "downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.302036785s
Feb  3 22:46:25.904: INFO: Pod "downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.311594678s
STEP: Saw pod success
Feb  3 22:46:25.904: INFO: Pod "downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6" satisfied condition "success or failure"
Feb  3 22:46:25.908: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6 container client-container: 
STEP: delete the pod
Feb  3 22:46:25.985: INFO: Waiting for pod downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6 to disappear
Feb  3 22:46:25.998: INFO: Pod downwardapi-volume-5e13a726-b326-4010-b733-9dfe274079e6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:46:25.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5062" for this suite.

• [SLOW TEST:11.410 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4067,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:46:26.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  3 22:46:34.873: INFO: Successfully updated pod "labelsupdate9903c2b9-c1ae-4f94-a021-907f476e5241"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:46:36.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-760" for this suite.

• [SLOW TEST:10.915 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4096,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:46:36.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:46:46.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6359" for this suite.

• [SLOW TEST:9.250 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":252,"skipped":4132,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:46:46.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  3 22:46:57.607: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  3 22:47:12.869: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:47:12.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2101" for this suite.

• [SLOW TEST:26.714 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":253,"skipped":4139,"failed":0}
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:47:12.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:47:13.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0" in namespace "downward-api-3670" to be "success or failure"
Feb  3 22:47:13.071: INFO: Pod "downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.678285ms
Feb  3 22:47:15.080: INFO: Pod "downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031708474s
Feb  3 22:47:17.087: INFO: Pod "downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038535568s
Feb  3 22:47:19.101: INFO: Pod "downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053267368s
Feb  3 22:47:21.107: INFO: Pod "downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058779721s
STEP: Saw pod success
Feb  3 22:47:21.107: INFO: Pod "downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0" satisfied condition "success or failure"
Feb  3 22:47:21.110: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0 container client-container: 
STEP: delete the pod
Feb  3 22:47:21.186: INFO: Waiting for pod downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0 to disappear
Feb  3 22:47:21.199: INFO: Pod downwardapi-volume-6c0060ba-f7db-4930-b4de-21903a0743a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:47:21.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3670" for this suite.

• [SLOW TEST:8.310 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4139,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:47:21.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb  3 22:47:21.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5682'
Feb  3 22:47:24.654: INFO: stderr: ""
Feb  3 22:47:24.654: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 22:47:24.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:24.803: INFO: stderr: ""
Feb  3 22:47:24.803: INFO: stdout: "update-demo-nautilus-r9ssh update-demo-nautilus-s98x2 "
Feb  3 22:47:24.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:25.030: INFO: stderr: ""
Feb  3 22:47:25.031: INFO: stdout: ""
Feb  3 22:47:25.031: INFO: update-demo-nautilus-r9ssh is created but not running
Feb  3 22:47:30.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:31.778: INFO: stderr: ""
Feb  3 22:47:31.778: INFO: stdout: "update-demo-nautilus-r9ssh update-demo-nautilus-s98x2 "
Feb  3 22:47:31.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:32.183: INFO: stderr: ""
Feb  3 22:47:32.183: INFO: stdout: ""
Feb  3 22:47:32.183: INFO: update-demo-nautilus-r9ssh is created but not running
Feb  3 22:47:37.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:37.412: INFO: stderr: ""
Feb  3 22:47:37.412: INFO: stdout: "update-demo-nautilus-r9ssh update-demo-nautilus-s98x2 "
Feb  3 22:47:37.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:37.546: INFO: stderr: ""
Feb  3 22:47:37.546: INFO: stdout: "true"
Feb  3 22:47:37.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:37.656: INFO: stderr: ""
Feb  3 22:47:37.656: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 22:47:37.657: INFO: validating pod update-demo-nautilus-r9ssh
Feb  3 22:47:37.665: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 22:47:37.666: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 22:47:37.666: INFO: update-demo-nautilus-r9ssh is verified up and running
Feb  3 22:47:37.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s98x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:37.765: INFO: stderr: ""
Feb  3 22:47:37.765: INFO: stdout: "true"
Feb  3 22:47:37.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s98x2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:37.881: INFO: stderr: ""
Feb  3 22:47:37.881: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 22:47:37.881: INFO: validating pod update-demo-nautilus-s98x2
Feb  3 22:47:37.890: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 22:47:37.890: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 22:47:37.890: INFO: update-demo-nautilus-s98x2 is verified up and running
STEP: scaling down the replication controller
Feb  3 22:47:37.895: INFO: scanned /root for discovery docs: 
Feb  3 22:47:37.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5682'
Feb  3 22:47:39.089: INFO: stderr: ""
Feb  3 22:47:39.089: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 22:47:39.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:39.321: INFO: stderr: ""
Feb  3 22:47:39.321: INFO: stdout: "update-demo-nautilus-r9ssh update-demo-nautilus-s98x2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  3 22:47:44.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:44.435: INFO: stderr: ""
Feb  3 22:47:44.435: INFO: stdout: "update-demo-nautilus-r9ssh update-demo-nautilus-s98x2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  3 22:47:49.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:49.629: INFO: stderr: ""
Feb  3 22:47:49.629: INFO: stdout: "update-demo-nautilus-r9ssh update-demo-nautilus-s98x2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  3 22:47:54.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:54.802: INFO: stderr: ""
Feb  3 22:47:54.802: INFO: stdout: "update-demo-nautilus-r9ssh "
Feb  3 22:47:54.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:54.976: INFO: stderr: ""
Feb  3 22:47:54.976: INFO: stdout: "true"
Feb  3 22:47:54.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:55.106: INFO: stderr: ""
Feb  3 22:47:55.106: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 22:47:55.106: INFO: validating pod update-demo-nautilus-r9ssh
Feb  3 22:47:55.115: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 22:47:55.115: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 22:47:55.115: INFO: update-demo-nautilus-r9ssh is verified up and running
STEP: scaling up the replication controller
Feb  3 22:47:55.118: INFO: scanned /root for discovery docs: 
Feb  3 22:47:55.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5682'
Feb  3 22:47:56.392: INFO: stderr: ""
Feb  3 22:47:56.392: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 22:47:56.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:47:56.622: INFO: stderr: ""
Feb  3 22:47:56.622: INFO: stdout: "update-demo-nautilus-ddz8s update-demo-nautilus-r9ssh "
Feb  3 22:47:56.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddz8s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:47:56.726: INFO: stderr: ""
Feb  3 22:47:56.726: INFO: stdout: ""
Feb  3 22:47:56.726: INFO: update-demo-nautilus-ddz8s is created but not running
Feb  3 22:48:01.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5682'
Feb  3 22:48:01.924: INFO: stderr: ""
Feb  3 22:48:01.924: INFO: stdout: "update-demo-nautilus-ddz8s update-demo-nautilus-r9ssh "
Feb  3 22:48:01.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddz8s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:48:02.100: INFO: stderr: ""
Feb  3 22:48:02.100: INFO: stdout: "true"
Feb  3 22:48:02.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddz8s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:48:02.270: INFO: stderr: ""
Feb  3 22:48:02.270: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 22:48:02.270: INFO: validating pod update-demo-nautilus-ddz8s
Feb  3 22:48:02.276: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 22:48:02.276: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 22:48:02.276: INFO: update-demo-nautilus-ddz8s is verified up and running
Feb  3 22:48:02.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:48:02.398: INFO: stderr: ""
Feb  3 22:48:02.398: INFO: stdout: "true"
Feb  3 22:48:02.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9ssh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5682'
Feb  3 22:48:02.512: INFO: stderr: ""
Feb  3 22:48:02.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 22:48:02.512: INFO: validating pod update-demo-nautilus-r9ssh
Feb  3 22:48:02.519: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 22:48:02.520: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 22:48:02.520: INFO: update-demo-nautilus-r9ssh is verified up and running
STEP: using delete to clean up resources
Feb  3 22:48:02.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5682'
Feb  3 22:48:02.643: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 22:48:02.643: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  3 22:48:02.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5682'
Feb  3 22:48:02.755: INFO: stderr: "No resources found in kubectl-5682 namespace.\n"
Feb  3 22:48:02.755: INFO: stdout: ""
Feb  3 22:48:02.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5682 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 22:48:02.909: INFO: stderr: ""
Feb  3 22:48:02.909: INFO: stdout: "update-demo-nautilus-ddz8s\nupdate-demo-nautilus-r9ssh\n"
Feb  3 22:48:03.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5682'
Feb  3 22:48:03.660: INFO: stderr: "No resources found in kubectl-5682 namespace.\n"
Feb  3 22:48:03.660: INFO: stdout: ""
Feb  3 22:48:03.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5682 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 22:48:03.858: INFO: stderr: ""
Feb  3 22:48:03.858: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:48:03.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5682" for this suite.

• [SLOW TEST:42.725 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":255,"skipped":4156,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:48:03.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:48:05.675: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  3 22:48:08.829: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:48:08.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5016" for this suite.

• [SLOW TEST:5.603 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":256,"skipped":4160,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:48:09.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-6995eabc-a46f-469c-bf01-00db28c4985e
STEP: Creating a pod to test consume secrets
Feb  3 22:48:10.942: INFO: Waiting up to 5m0s for pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317" in namespace "secrets-6896" to be "success or failure"
Feb  3 22:48:10.951: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317": Phase="Pending", Reason="", readiness=false. Elapsed: 8.393295ms
Feb  3 22:48:13.789: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.84671803s
Feb  3 22:48:15.800: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317": Phase="Pending", Reason="", readiness=false. Elapsed: 4.85741395s
Feb  3 22:48:17.812: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317": Phase="Pending", Reason="", readiness=false. Elapsed: 6.869723239s
Feb  3 22:48:19.827: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317": Phase="Pending", Reason="", readiness=false. Elapsed: 8.884287636s
Feb  3 22:48:21.877: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317": Phase="Pending", Reason="", readiness=false. Elapsed: 10.934524947s
Feb  3 22:48:23.885: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.943078613s
STEP: Saw pod success
Feb  3 22:48:23.886: INFO: Pod "pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317" satisfied condition "success or failure"
Feb  3 22:48:23.891: INFO: Trying to get logs from node jerma-node pod pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317 container secret-volume-test: 
STEP: delete the pod
Feb  3 22:48:24.014: INFO: Waiting for pod pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317 to disappear
Feb  3 22:48:24.031: INFO: Pod pod-secrets-46b6e715-2c3e-4bad-ba2b-0d191ea62317 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:48:24.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6896" for this suite.

• [SLOW TEST:14.498 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4183,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:48:24.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-emptyKey-56b51975-aee9-4c64-8ff9-a2f9bd2b5f38
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:48:24.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7746" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":258,"skipped":4187,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:48:24.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  3 22:48:24.350: INFO: Waiting up to 5m0s for pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994" in namespace "emptydir-6361" to be "success or failure"
Feb  3 22:48:24.356: INFO: Pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994": Phase="Pending", Reason="", readiness=false. Elapsed: 5.513974ms
Feb  3 22:48:26.817: INFO: Pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466819855s
Feb  3 22:48:28.824: INFO: Pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472969351s
Feb  3 22:48:30.831: INFO: Pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994": Phase="Pending", Reason="", readiness=false. Elapsed: 6.480185432s
Feb  3 22:48:32.835: INFO: Pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994": Phase="Pending", Reason="", readiness=false. Elapsed: 8.484906702s
Feb  3 22:48:34.842: INFO: Pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.491093442s
STEP: Saw pod success
Feb  3 22:48:34.842: INFO: Pod "pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994" satisfied condition "success or failure"
Feb  3 22:48:34.846: INFO: Trying to get logs from node jerma-node pod pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994 container test-container: 
STEP: delete the pod
Feb  3 22:48:35.039: INFO: Waiting for pod pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994 to disappear
Feb  3 22:48:35.139: INFO: Pod pod-552ca552-8b4b-469a-9ffb-c48ff8f7a994 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:48:35.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6361" for this suite.

• [SLOW TEST:10.975 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4199,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:48:35.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  3 22:48:43.941: INFO: Successfully updated pod "annotationupdatecd349a86-b5b9-4332-bc90-77ab153117af"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:48:45.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1172" for this suite.

• [SLOW TEST:10.831 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4201,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:48:46.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:48:46.181: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8c250f9f-bee9-4e64-8d6a-ad400e95b1dd" in namespace "security-context-test-6714" to be "success or failure"
Feb  3 22:48:46.190: INFO: Pod "busybox-user-65534-8c250f9f-bee9-4e64-8d6a-ad400e95b1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.339872ms
Feb  3 22:48:48.198: INFO: Pod "busybox-user-65534-8c250f9f-bee9-4e64-8d6a-ad400e95b1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016811183s
Feb  3 22:48:50.206: INFO: Pod "busybox-user-65534-8c250f9f-bee9-4e64-8d6a-ad400e95b1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02471622s
Feb  3 22:48:52.231: INFO: Pod "busybox-user-65534-8c250f9f-bee9-4e64-8d6a-ad400e95b1dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050086045s
Feb  3 22:48:54.283: INFO: Pod "busybox-user-65534-8c250f9f-bee9-4e64-8d6a-ad400e95b1dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102408239s
Feb  3 22:48:54.284: INFO: Pod "busybox-user-65534-8c250f9f-bee9-4e64-8d6a-ad400e95b1dd" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:48:54.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6714" for this suite.

• [SLOW TEST:8.331 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:48:54.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:49:10.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9313" for this suite.

• [SLOW TEST:16.214 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":262,"skipped":4241,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:49:10.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:49:10.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9455" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":263,"skipped":4250,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:49:10.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:49:10.769: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:49:11.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7574" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":264,"skipped":4258,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:49:11.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 22:49:11.767: INFO: Waiting up to 5m0s for pod "pod-88cba455-eb88-4694-8433-7c4b43465516" in namespace "emptydir-3036" to be "success or failure"
Feb  3 22:49:11.781: INFO: Pod "pod-88cba455-eb88-4694-8433-7c4b43465516": Phase="Pending", Reason="", readiness=false. Elapsed: 13.162004ms
Feb  3 22:49:13.809: INFO: Pod "pod-88cba455-eb88-4694-8433-7c4b43465516": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041910588s
Feb  3 22:49:15.815: INFO: Pod "pod-88cba455-eb88-4694-8433-7c4b43465516": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047485638s
Feb  3 22:49:17.825: INFO: Pod "pod-88cba455-eb88-4694-8433-7c4b43465516": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057014177s
Feb  3 22:49:19.836: INFO: Pod "pod-88cba455-eb88-4694-8433-7c4b43465516": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068388772s
STEP: Saw pod success
Feb  3 22:49:19.836: INFO: Pod "pod-88cba455-eb88-4694-8433-7c4b43465516" satisfied condition "success or failure"
Feb  3 22:49:19.840: INFO: Trying to get logs from node jerma-node pod pod-88cba455-eb88-4694-8433-7c4b43465516 container test-container: 
STEP: delete the pod
Feb  3 22:49:20.679: INFO: Waiting for pod pod-88cba455-eb88-4694-8433-7c4b43465516 to disappear
Feb  3 22:49:20.738: INFO: Pod pod-88cba455-eb88-4694-8433-7c4b43465516 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:49:20.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3036" for this suite.

• [SLOW TEST:9.280 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4263,"failed":0}
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:49:20.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  3 22:49:39.300: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:39.300: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:39.371374       8 log.go:172] (0xc00141c370) (0xc000d78140) Create stream
I0203 22:49:39.371478       8 log.go:172] (0xc00141c370) (0xc000d78140) Stream added, broadcasting: 1
I0203 22:49:39.379117       8 log.go:172] (0xc00141c370) Reply frame received for 1
I0203 22:49:39.379211       8 log.go:172] (0xc00141c370) (0xc000d781e0) Create stream
I0203 22:49:39.379446       8 log.go:172] (0xc00141c370) (0xc000d781e0) Stream added, broadcasting: 3
I0203 22:49:39.381825       8 log.go:172] (0xc00141c370) Reply frame received for 3
I0203 22:49:39.381871       8 log.go:172] (0xc00141c370) (0xc00131a1e0) Create stream
I0203 22:49:39.381881       8 log.go:172] (0xc00141c370) (0xc00131a1e0) Stream added, broadcasting: 5
I0203 22:49:39.383954       8 log.go:172] (0xc00141c370) Reply frame received for 5
I0203 22:49:39.486616       8 log.go:172] (0xc00141c370) Data frame received for 3
I0203 22:49:39.486691       8 log.go:172] (0xc000d781e0) (3) Data frame handling
I0203 22:49:39.486716       8 log.go:172] (0xc000d781e0) (3) Data frame sent
I0203 22:49:39.584757       8 log.go:172] (0xc00141c370) Data frame received for 1
I0203 22:49:39.584922       8 log.go:172] (0xc00141c370) (0xc000d781e0) Stream removed, broadcasting: 3
I0203 22:49:39.585090       8 log.go:172] (0xc000d78140) (1) Data frame handling
I0203 22:49:39.585259       8 log.go:172] (0xc000d78140) (1) Data frame sent
I0203 22:49:39.585324       8 log.go:172] (0xc00141c370) (0xc00131a1e0) Stream removed, broadcasting: 5
I0203 22:49:39.585424       8 log.go:172] (0xc00141c370) (0xc000d78140) Stream removed, broadcasting: 1
I0203 22:49:39.585510       8 log.go:172] (0xc00141c370) Go away received
I0203 22:49:39.586030       8 log.go:172] (0xc00141c370) (0xc000d78140) Stream removed, broadcasting: 1
I0203 22:49:39.586133       8 log.go:172] (0xc00141c370) (0xc000d781e0) Stream removed, broadcasting: 3
I0203 22:49:39.586167       8 log.go:172] (0xc00141c370) (0xc00131a1e0) Stream removed, broadcasting: 5
Feb  3 22:49:39.586: INFO: Exec stderr: ""
Feb  3 22:49:39.586: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:39.586: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:39.659009       8 log.go:172] (0xc00259a6e0) (0xc000af7f40) Create stream
I0203 22:49:39.659069       8 log.go:172] (0xc00259a6e0) (0xc000af7f40) Stream added, broadcasting: 1
I0203 22:49:39.663765       8 log.go:172] (0xc00259a6e0) Reply frame received for 1
I0203 22:49:39.663795       8 log.go:172] (0xc00259a6e0) (0xc000f295e0) Create stream
I0203 22:49:39.663802       8 log.go:172] (0xc00259a6e0) (0xc000f295e0) Stream added, broadcasting: 3
I0203 22:49:39.664888       8 log.go:172] (0xc00259a6e0) Reply frame received for 3
I0203 22:49:39.664916       8 log.go:172] (0xc00259a6e0) (0xc00111caa0) Create stream
I0203 22:49:39.664926       8 log.go:172] (0xc00259a6e0) (0xc00111caa0) Stream added, broadcasting: 5
I0203 22:49:39.665937       8 log.go:172] (0xc00259a6e0) Reply frame received for 5
I0203 22:49:39.726842       8 log.go:172] (0xc00259a6e0) Data frame received for 3
I0203 22:49:39.726905       8 log.go:172] (0xc000f295e0) (3) Data frame handling
I0203 22:49:39.726929       8 log.go:172] (0xc000f295e0) (3) Data frame sent
I0203 22:49:39.789837       8 log.go:172] (0xc00259a6e0) Data frame received for 1
I0203 22:49:39.789930       8 log.go:172] (0xc00259a6e0) (0xc000f295e0) Stream removed, broadcasting: 3
I0203 22:49:39.789964       8 log.go:172] (0xc000af7f40) (1) Data frame handling
I0203 22:49:39.789981       8 log.go:172] (0xc000af7f40) (1) Data frame sent
I0203 22:49:39.789995       8 log.go:172] (0xc00259a6e0) (0xc000af7f40) Stream removed, broadcasting: 1
I0203 22:49:39.790051       8 log.go:172] (0xc00259a6e0) (0xc00111caa0) Stream removed, broadcasting: 5
I0203 22:49:39.790170       8 log.go:172] (0xc00259a6e0) Go away received
I0203 22:49:39.790361       8 log.go:172] (0xc00259a6e0) (0xc000af7f40) Stream removed, broadcasting: 1
I0203 22:49:39.790483       8 log.go:172] (0xc00259a6e0) (0xc000f295e0) Stream removed, broadcasting: 3
I0203 22:49:39.790510       8 log.go:172] (0xc00259a6e0) (0xc00111caa0) Stream removed, broadcasting: 5
Feb  3 22:49:39.790: INFO: Exec stderr: ""
Feb  3 22:49:39.790: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:39.790: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:39.833255       8 log.go:172] (0xc001b6f550) (0xc00111ce60) Create stream
I0203 22:49:39.833399       8 log.go:172] (0xc001b6f550) (0xc00111ce60) Stream added, broadcasting: 1
I0203 22:49:39.840834       8 log.go:172] (0xc001b6f550) Reply frame received for 1
I0203 22:49:39.840935       8 log.go:172] (0xc001b6f550) (0xc000f29860) Create stream
I0203 22:49:39.840967       8 log.go:172] (0xc001b6f550) (0xc000f29860) Stream added, broadcasting: 3
I0203 22:49:39.845216       8 log.go:172] (0xc001b6f550) Reply frame received for 3
I0203 22:49:39.845261       8 log.go:172] (0xc001b6f550) (0xc000f29a40) Create stream
I0203 22:49:39.845280       8 log.go:172] (0xc001b6f550) (0xc000f29a40) Stream added, broadcasting: 5
I0203 22:49:39.847467       8 log.go:172] (0xc001b6f550) Reply frame received for 5
I0203 22:49:39.931814       8 log.go:172] (0xc001b6f550) Data frame received for 3
I0203 22:49:39.932003       8 log.go:172] (0xc000f29860) (3) Data frame handling
I0203 22:49:39.932167       8 log.go:172] (0xc000f29860) (3) Data frame sent
I0203 22:49:39.997232       8 log.go:172] (0xc001b6f550) (0xc000f29860) Stream removed, broadcasting: 3
I0203 22:49:39.997440       8 log.go:172] (0xc001b6f550) Data frame received for 1
I0203 22:49:39.997453       8 log.go:172] (0xc00111ce60) (1) Data frame handling
I0203 22:49:39.997467       8 log.go:172] (0xc00111ce60) (1) Data frame sent
I0203 22:49:39.997478       8 log.go:172] (0xc001b6f550) (0xc00111ce60) Stream removed, broadcasting: 1
I0203 22:49:39.997716       8 log.go:172] (0xc001b6f550) (0xc000f29a40) Stream removed, broadcasting: 5
I0203 22:49:39.997862       8 log.go:172] (0xc001b6f550) Go away received
I0203 22:49:39.997951       8 log.go:172] (0xc001b6f550) (0xc00111ce60) Stream removed, broadcasting: 1
I0203 22:49:39.997986       8 log.go:172] (0xc001b6f550) (0xc000f29860) Stream removed, broadcasting: 3
I0203 22:49:39.997996       8 log.go:172] (0xc001b6f550) (0xc000f29a40) Stream removed, broadcasting: 5
Feb  3 22:49:39.998: INFO: Exec stderr: ""
Feb  3 22:49:39.998: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:39.998: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:40.042052       8 log.go:172] (0xc001b6fb80) (0xc00111d2c0) Create stream
I0203 22:49:40.042089       8 log.go:172] (0xc001b6fb80) (0xc00111d2c0) Stream added, broadcasting: 1
I0203 22:49:40.046213       8 log.go:172] (0xc001b6fb80) Reply frame received for 1
I0203 22:49:40.046273       8 log.go:172] (0xc001b6fb80) (0xc00131a460) Create stream
I0203 22:49:40.046301       8 log.go:172] (0xc001b6fb80) (0xc00131a460) Stream added, broadcasting: 3
I0203 22:49:40.047812       8 log.go:172] (0xc001b6fb80) Reply frame received for 3
I0203 22:49:40.047848       8 log.go:172] (0xc001b6fb80) (0xc00131a960) Create stream
I0203 22:49:40.047862       8 log.go:172] (0xc001b6fb80) (0xc00131a960) Stream added, broadcasting: 5
I0203 22:49:40.049303       8 log.go:172] (0xc001b6fb80) Reply frame received for 5
I0203 22:49:40.120373       8 log.go:172] (0xc001b6fb80) Data frame received for 3
I0203 22:49:40.120411       8 log.go:172] (0xc00131a460) (3) Data frame handling
I0203 22:49:40.120442       8 log.go:172] (0xc00131a460) (3) Data frame sent
I0203 22:49:40.188831       8 log.go:172] (0xc001b6fb80) (0xc00131a460) Stream removed, broadcasting: 3
I0203 22:49:40.188913       8 log.go:172] (0xc001b6fb80) Data frame received for 1
I0203 22:49:40.188972       8 log.go:172] (0xc001b6fb80) (0xc00131a960) Stream removed, broadcasting: 5
I0203 22:49:40.189055       8 log.go:172] (0xc00111d2c0) (1) Data frame handling
I0203 22:49:40.189084       8 log.go:172] (0xc00111d2c0) (1) Data frame sent
I0203 22:49:40.189107       8 log.go:172] (0xc001b6fb80) (0xc00111d2c0) Stream removed, broadcasting: 1
I0203 22:49:40.189159       8 log.go:172] (0xc001b6fb80) Go away received
I0203 22:49:40.189417       8 log.go:172] (0xc001b6fb80) (0xc00111d2c0) Stream removed, broadcasting: 1
I0203 22:49:40.189434       8 log.go:172] (0xc001b6fb80) (0xc00131a460) Stream removed, broadcasting: 3
I0203 22:49:40.189443       8 log.go:172] (0xc001b6fb80) (0xc00131a960) Stream removed, broadcasting: 5
Feb  3 22:49:40.189: INFO: Exec stderr: ""
STEP: Verifying the container's /etc/hosts is not kubelet-managed, since the container mounts its own /etc/hosts
Feb  3 22:49:40.189: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:40.189: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:40.225025       8 log.go:172] (0xc00259ad10) (0xc0010783c0) Create stream
I0203 22:49:40.225091       8 log.go:172] (0xc00259ad10) (0xc0010783c0) Stream added, broadcasting: 1
I0203 22:49:40.229313       8 log.go:172] (0xc00259ad10) Reply frame received for 1
I0203 22:49:40.229346       8 log.go:172] (0xc00259ad10) (0xc00131aa00) Create stream
I0203 22:49:40.229358       8 log.go:172] (0xc00259ad10) (0xc00131aa00) Stream added, broadcasting: 3
I0203 22:49:40.230910       8 log.go:172] (0xc00259ad10) Reply frame received for 3
I0203 22:49:40.230969       8 log.go:172] (0xc00259ad10) (0xc00111d400) Create stream
I0203 22:49:40.230993       8 log.go:172] (0xc00259ad10) (0xc00111d400) Stream added, broadcasting: 5
I0203 22:49:40.232369       8 log.go:172] (0xc00259ad10) Reply frame received for 5
I0203 22:49:40.309673       8 log.go:172] (0xc00259ad10) Data frame received for 3
I0203 22:49:40.309738       8 log.go:172] (0xc00131aa00) (3) Data frame handling
I0203 22:49:40.309759       8 log.go:172] (0xc00131aa00) (3) Data frame sent
I0203 22:49:40.375160       8 log.go:172] (0xc00259ad10) Data frame received for 1
I0203 22:49:40.375189       8 log.go:172] (0xc0010783c0) (1) Data frame handling
I0203 22:49:40.375213       8 log.go:172] (0xc0010783c0) (1) Data frame sent
I0203 22:49:40.375257       8 log.go:172] (0xc00259ad10) (0xc00131aa00) Stream removed, broadcasting: 3
I0203 22:49:40.375413       8 log.go:172] (0xc00259ad10) (0xc0010783c0) Stream removed, broadcasting: 1
I0203 22:49:40.376171       8 log.go:172] (0xc00259ad10) (0xc00111d400) Stream removed, broadcasting: 5
I0203 22:49:40.376222       8 log.go:172] (0xc00259ad10) Go away received
I0203 22:49:40.376405       8 log.go:172] (0xc00259ad10) (0xc0010783c0) Stream removed, broadcasting: 1
I0203 22:49:40.376451       8 log.go:172] (0xc00259ad10) (0xc00131aa00) Stream removed, broadcasting: 3
I0203 22:49:40.376473       8 log.go:172] (0xc00259ad10) (0xc00111d400) Stream removed, broadcasting: 5
Feb  3 22:49:40.376: INFO: Exec stderr: ""
Feb  3 22:49:40.376: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:40.376: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:40.415750       8 log.go:172] (0xc002ed4210) (0xc00131b180) Create stream
I0203 22:49:40.415828       8 log.go:172] (0xc002ed4210) (0xc00131b180) Stream added, broadcasting: 1
I0203 22:49:40.419629       8 log.go:172] (0xc002ed4210) Reply frame received for 1
I0203 22:49:40.419689       8 log.go:172] (0xc002ed4210) (0xc001078500) Create stream
I0203 22:49:40.419701       8 log.go:172] (0xc002ed4210) (0xc001078500) Stream added, broadcasting: 3
I0203 22:49:40.420613       8 log.go:172] (0xc002ed4210) Reply frame received for 3
I0203 22:49:40.420760       8 log.go:172] (0xc002ed4210) (0xc000f29cc0) Create stream
I0203 22:49:40.420794       8 log.go:172] (0xc002ed4210) (0xc000f29cc0) Stream added, broadcasting: 5
I0203 22:49:40.421830       8 log.go:172] (0xc002ed4210) Reply frame received for 5
I0203 22:49:40.496208       8 log.go:172] (0xc002ed4210) Data frame received for 3
I0203 22:49:40.496402       8 log.go:172] (0xc001078500) (3) Data frame handling
I0203 22:49:40.496522       8 log.go:172] (0xc001078500) (3) Data frame sent
I0203 22:49:40.650255       8 log.go:172] (0xc002ed4210) (0xc001078500) Stream removed, broadcasting: 3
I0203 22:49:40.650514       8 log.go:172] (0xc002ed4210) Data frame received for 1
I0203 22:49:40.650632       8 log.go:172] (0xc00131b180) (1) Data frame handling
I0203 22:49:40.650674       8 log.go:172] (0xc00131b180) (1) Data frame sent
I0203 22:49:40.650694       8 log.go:172] (0xc002ed4210) (0xc00131b180) Stream removed, broadcasting: 1
I0203 22:49:40.650796       8 log.go:172] (0xc002ed4210) (0xc000f29cc0) Stream removed, broadcasting: 5
I0203 22:49:40.650875       8 log.go:172] (0xc002ed4210) Go away received
I0203 22:49:40.651319       8 log.go:172] (0xc002ed4210) (0xc00131b180) Stream removed, broadcasting: 1
I0203 22:49:40.651338       8 log.go:172] (0xc002ed4210) (0xc001078500) Stream removed, broadcasting: 3
I0203 22:49:40.651359       8 log.go:172] (0xc002ed4210) (0xc000f29cc0) Stream removed, broadcasting: 5
Feb  3 22:49:40.651: INFO: Exec stderr: ""
STEP: Verifying the container's /etc/hosts content is not kubelet-managed for a pod with hostNetwork=true
Feb  3 22:49:40.651: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:40.651: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:40.698689       8 log.go:172] (0xc002661b80) (0xc001158140) Create stream
I0203 22:49:40.698860       8 log.go:172] (0xc002661b80) (0xc001158140) Stream added, broadcasting: 1
I0203 22:49:40.704126       8 log.go:172] (0xc002661b80) Reply frame received for 1
I0203 22:49:40.704171       8 log.go:172] (0xc002661b80) (0xc00111d4a0) Create stream
I0203 22:49:40.704183       8 log.go:172] (0xc002661b80) (0xc00111d4a0) Stream added, broadcasting: 3
I0203 22:49:40.705192       8 log.go:172] (0xc002661b80) Reply frame received for 3
I0203 22:49:40.705226       8 log.go:172] (0xc002661b80) (0xc001158500) Create stream
I0203 22:49:40.705233       8 log.go:172] (0xc002661b80) (0xc001158500) Stream added, broadcasting: 5
I0203 22:49:40.706343       8 log.go:172] (0xc002661b80) Reply frame received for 5
I0203 22:49:40.811484       8 log.go:172] (0xc002661b80) Data frame received for 3
I0203 22:49:40.811757       8 log.go:172] (0xc00111d4a0) (3) Data frame handling
I0203 22:49:40.812002       8 log.go:172] (0xc00111d4a0) (3) Data frame sent
I0203 22:49:40.902908       8 log.go:172] (0xc002661b80) (0xc00111d4a0) Stream removed, broadcasting: 3
I0203 22:49:40.903064       8 log.go:172] (0xc002661b80) Data frame received for 1
I0203 22:49:40.903095       8 log.go:172] (0xc001158140) (1) Data frame handling
I0203 22:49:40.903122       8 log.go:172] (0xc002661b80) (0xc001158500) Stream removed, broadcasting: 5
I0203 22:49:40.903173       8 log.go:172] (0xc001158140) (1) Data frame sent
I0203 22:49:40.903182       8 log.go:172] (0xc002661b80) (0xc001158140) Stream removed, broadcasting: 1
I0203 22:49:40.903384       8 log.go:172] (0xc002661b80) Go away received
I0203 22:49:40.903529       8 log.go:172] (0xc002661b80) (0xc001158140) Stream removed, broadcasting: 1
I0203 22:49:40.903568       8 log.go:172] (0xc002661b80) (0xc00111d4a0) Stream removed, broadcasting: 3
I0203 22:49:40.903580       8 log.go:172] (0xc002661b80) (0xc001158500) Stream removed, broadcasting: 5
Feb  3 22:49:40.903: INFO: Exec stderr: ""
Feb  3 22:49:40.903: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:40.903: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:40.945896       8 log.go:172] (0xc00302e2c0) (0xc001158780) Create stream
I0203 22:49:40.945970       8 log.go:172] (0xc00302e2c0) (0xc001158780) Stream added, broadcasting: 1
I0203 22:49:40.949868       8 log.go:172] (0xc00302e2c0) Reply frame received for 1
I0203 22:49:40.949895       8 log.go:172] (0xc00302e2c0) (0xc001158820) Create stream
I0203 22:49:40.949907       8 log.go:172] (0xc00302e2c0) (0xc001158820) Stream added, broadcasting: 3
I0203 22:49:40.951017       8 log.go:172] (0xc00302e2c0) Reply frame received for 3
I0203 22:49:40.951035       8 log.go:172] (0xc00302e2c0) (0xc001158960) Create stream
I0203 22:49:40.951042       8 log.go:172] (0xc00302e2c0) (0xc001158960) Stream added, broadcasting: 5
I0203 22:49:40.952114       8 log.go:172] (0xc00302e2c0) Reply frame received for 5
I0203 22:49:41.023382       8 log.go:172] (0xc00302e2c0) Data frame received for 3
I0203 22:49:41.023430       8 log.go:172] (0xc001158820) (3) Data frame handling
I0203 22:49:41.023446       8 log.go:172] (0xc001158820) (3) Data frame sent
I0203 22:49:41.097887       8 log.go:172] (0xc00302e2c0) Data frame received for 1
I0203 22:49:41.097951       8 log.go:172] (0xc00302e2c0) (0xc001158960) Stream removed, broadcasting: 5
I0203 22:49:41.097992       8 log.go:172] (0xc001158780) (1) Data frame handling
I0203 22:49:41.098007       8 log.go:172] (0xc001158780) (1) Data frame sent
I0203 22:49:41.098050       8 log.go:172] (0xc00302e2c0) (0xc001158820) Stream removed, broadcasting: 3
I0203 22:49:41.098113       8 log.go:172] (0xc00302e2c0) (0xc001158780) Stream removed, broadcasting: 1
I0203 22:49:41.098133       8 log.go:172] (0xc00302e2c0) Go away received
I0203 22:49:41.098520       8 log.go:172] (0xc00302e2c0) (0xc001158780) Stream removed, broadcasting: 1
I0203 22:49:41.098632       8 log.go:172] (0xc00302e2c0) (0xc001158820) Stream removed, broadcasting: 3
I0203 22:49:41.098653       8 log.go:172] (0xc00302e2c0) (0xc001158960) Stream removed, broadcasting: 5
Feb  3 22:49:41.098: INFO: Exec stderr: ""
Feb  3 22:49:41.098: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:41.098: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:41.148296       8 log.go:172] (0xc00259b340) (0xc001078a00) Create stream
I0203 22:49:41.148335       8 log.go:172] (0xc00259b340) (0xc001078a00) Stream added, broadcasting: 1
I0203 22:49:41.151815       8 log.go:172] (0xc00259b340) Reply frame received for 1
I0203 22:49:41.151876       8 log.go:172] (0xc00259b340) (0xc00131bae0) Create stream
I0203 22:49:41.151895       8 log.go:172] (0xc00259b340) (0xc00131bae0) Stream added, broadcasting: 3
I0203 22:49:41.153693       8 log.go:172] (0xc00259b340) Reply frame received for 3
I0203 22:49:41.153852       8 log.go:172] (0xc00259b340) (0xc00111d720) Create stream
I0203 22:49:41.153862       8 log.go:172] (0xc00259b340) (0xc00111d720) Stream added, broadcasting: 5
I0203 22:49:41.155599       8 log.go:172] (0xc00259b340) Reply frame received for 5
I0203 22:49:41.225179       8 log.go:172] (0xc00259b340) Data frame received for 3
I0203 22:49:41.225265       8 log.go:172] (0xc00131bae0) (3) Data frame handling
I0203 22:49:41.225294       8 log.go:172] (0xc00131bae0) (3) Data frame sent
I0203 22:49:41.302653       8 log.go:172] (0xc00259b340) Data frame received for 1
I0203 22:49:41.303042       8 log.go:172] (0xc00259b340) (0xc00131bae0) Stream removed, broadcasting: 3
I0203 22:49:41.303192       8 log.go:172] (0xc001078a00) (1) Data frame handling
I0203 22:49:41.303306       8 log.go:172] (0xc001078a00) (1) Data frame sent
I0203 22:49:41.303467       8 log.go:172] (0xc00259b340) (0xc00111d720) Stream removed, broadcasting: 5
I0203 22:49:41.303569       8 log.go:172] (0xc00259b340) (0xc001078a00) Stream removed, broadcasting: 1
I0203 22:49:41.303653       8 log.go:172] (0xc00259b340) Go away received
I0203 22:49:41.303958       8 log.go:172] (0xc00259b340) (0xc001078a00) Stream removed, broadcasting: 1
I0203 22:49:41.303982       8 log.go:172] (0xc00259b340) (0xc00131bae0) Stream removed, broadcasting: 3
I0203 22:49:41.304001       8 log.go:172] (0xc00259b340) (0xc00111d720) Stream removed, broadcasting: 5
Feb  3 22:49:41.304: INFO: Exec stderr: ""
Feb  3 22:49:41.304: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5914 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 22:49:41.304: INFO: >>> kubeConfig: /root/.kube/config
I0203 22:49:41.343193       8 log.go:172] (0xc00259b970) (0xc001078fa0) Create stream
I0203 22:49:41.343273       8 log.go:172] (0xc00259b970) (0xc001078fa0) Stream added, broadcasting: 1
I0203 22:49:41.346357       8 log.go:172] (0xc00259b970) Reply frame received for 1
I0203 22:49:41.346417       8 log.go:172] (0xc00259b970) (0xc000d78500) Create stream
I0203 22:49:41.346424       8 log.go:172] (0xc00259b970) (0xc000d78500) Stream added, broadcasting: 3
I0203 22:49:41.347430       8 log.go:172] (0xc00259b970) Reply frame received for 3
I0203 22:49:41.347453       8 log.go:172] (0xc00259b970) (0xc00131be00) Create stream
I0203 22:49:41.347461       8 log.go:172] (0xc00259b970) (0xc00131be00) Stream added, broadcasting: 5
I0203 22:49:41.348542       8 log.go:172] (0xc00259b970) Reply frame received for 5
I0203 22:49:41.402685       8 log.go:172] (0xc00259b970) Data frame received for 3
I0203 22:49:41.402731       8 log.go:172] (0xc000d78500) (3) Data frame handling
I0203 22:49:41.402757       8 log.go:172] (0xc000d78500) (3) Data frame sent
I0203 22:49:41.463787       8 log.go:172] (0xc00259b970) Data frame received for 1
I0203 22:49:41.463932       8 log.go:172] (0xc00259b970) (0xc000d78500) Stream removed, broadcasting: 3
I0203 22:49:41.463974       8 log.go:172] (0xc001078fa0) (1) Data frame handling
I0203 22:49:41.463990       8 log.go:172] (0xc001078fa0) (1) Data frame sent
I0203 22:49:41.464029       8 log.go:172] (0xc00259b970) (0xc00131be00) Stream removed, broadcasting: 5
I0203 22:49:41.464058       8 log.go:172] (0xc00259b970) (0xc001078fa0) Stream removed, broadcasting: 1
I0203 22:49:41.464076       8 log.go:172] (0xc00259b970) Go away received
I0203 22:49:41.464684       8 log.go:172] (0xc00259b970) (0xc001078fa0) Stream removed, broadcasting: 1
I0203 22:49:41.464722       8 log.go:172] (0xc00259b970) (0xc000d78500) Stream removed, broadcasting: 3
I0203 22:49:41.464748       8 log.go:172] (0xc00259b970) (0xc00131be00) Stream removed, broadcasting: 5
Feb  3 22:49:41.464: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:49:41.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5914" for this suite.

• [SLOW TEST:20.646 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4270,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:49:41.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4037
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4037
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-4037
Feb  3 22:49:41.716: INFO: Found 0 stateful pods, waiting for 1
Feb  3 22:49:51.732: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale-up halts with an unhealthy stateful pod
Feb  3 22:49:51.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:49:52.167: INFO: stderr: "I0203 22:49:51.926349    4330 log.go:172] (0xc000570000) (0xc0006828c0) Create stream\nI0203 22:49:51.926504    4330 log.go:172] (0xc000570000) (0xc0006828c0) Stream added, broadcasting: 1\nI0203 22:49:51.930200    4330 log.go:172] (0xc000570000) Reply frame received for 1\nI0203 22:49:51.930284    4330 log.go:172] (0xc000570000) (0xc0007135e0) Create stream\nI0203 22:49:51.930300    4330 log.go:172] (0xc000570000) (0xc0007135e0) Stream added, broadcasting: 3\nI0203 22:49:51.931352    4330 log.go:172] (0xc000570000) Reply frame received for 3\nI0203 22:49:51.931373    4330 log.go:172] (0xc000570000) (0xc000713680) Create stream\nI0203 22:49:51.931380    4330 log.go:172] (0xc000570000) (0xc000713680) Stream added, broadcasting: 5\nI0203 22:49:51.932350    4330 log.go:172] (0xc000570000) Reply frame received for 5\nI0203 22:49:52.016794    4330 log.go:172] (0xc000570000) Data frame received for 5\nI0203 22:49:52.016870    4330 log.go:172] (0xc000713680) (5) Data frame handling\nI0203 22:49:52.016901    4330 log.go:172] (0xc000713680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:49:52.049594    4330 log.go:172] (0xc000570000) Data frame received for 3\nI0203 22:49:52.049627    4330 log.go:172] (0xc0007135e0) (3) Data frame handling\nI0203 22:49:52.049653    4330 log.go:172] (0xc0007135e0) (3) Data frame sent\nI0203 22:49:52.148195    4330 log.go:172] (0xc000570000) (0xc000713680) Stream removed, broadcasting: 5\nI0203 22:49:52.148421    4330 log.go:172] (0xc000570000) Data frame received for 1\nI0203 22:49:52.148439    4330 log.go:172] (0xc0006828c0) (1) Data frame handling\nI0203 22:49:52.148482    4330 log.go:172] (0xc0006828c0) (1) Data frame sent\nI0203 22:49:52.148494    4330 log.go:172] (0xc000570000) (0xc0006828c0) Stream removed, broadcasting: 1\nI0203 22:49:52.149698    4330 log.go:172] (0xc000570000) (0xc0007135e0) Stream removed, broadcasting: 3\nI0203 22:49:52.149818    4330 log.go:172] (0xc000570000) (0xc0006828c0) Stream removed, broadcasting: 1\nI0203 22:49:52.149845    4330 log.go:172] (0xc000570000) (0xc0007135e0) Stream removed, broadcasting: 3\nI0203 22:49:52.149860    4330 log.go:172] (0xc000570000) (0xc000713680) Stream removed, broadcasting: 5\nI0203 22:49:52.150299    4330 log.go:172] (0xc000570000) Go away received\n"
Feb  3 22:49:52.168: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:49:52.168: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:49:52.174: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  3 22:50:02.181: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:50:02.181: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:50:02.230: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999778s
Feb  3 22:50:03.237: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.961718827s
Feb  3 22:50:04.245: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.954724428s
Feb  3 22:50:05.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.946459739s
Feb  3 22:50:06.258: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.941117889s
Feb  3 22:50:07.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.93299907s
Feb  3 22:50:08.276: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.926992508s
Feb  3 22:50:09.284: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.914916518s
Feb  3 22:50:10.301: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.906906215s
Feb  3 22:50:11.313: INFO: Verifying statefulset ss doesn't scale past 1 for another 890.05815ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-4037
Feb  3 22:50:12.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:50:12.737: INFO: stderr: "I0203 22:50:12.558084    4349 log.go:172] (0xc0007420b0) (0xc0007221e0) Create stream\nI0203 22:50:12.558385    4349 log.go:172] (0xc0007420b0) (0xc0007221e0) Stream added, broadcasting: 1\nI0203 22:50:12.564073    4349 log.go:172] (0xc0007420b0) Reply frame received for 1\nI0203 22:50:12.564146    4349 log.go:172] (0xc0007420b0) (0xc000a7e000) Create stream\nI0203 22:50:12.564195    4349 log.go:172] (0xc0007420b0) (0xc000a7e000) Stream added, broadcasting: 3\nI0203 22:50:12.565578    4349 log.go:172] (0xc0007420b0) Reply frame received for 3\nI0203 22:50:12.565633    4349 log.go:172] (0xc0007420b0) (0xc0005f2820) Create stream\nI0203 22:50:12.565646    4349 log.go:172] (0xc0007420b0) (0xc0005f2820) Stream added, broadcasting: 5\nI0203 22:50:12.568713    4349 log.go:172] (0xc0007420b0) Reply frame received for 5\nI0203 22:50:12.659152    4349 log.go:172] (0xc0007420b0) Data frame received for 5\nI0203 22:50:12.659313    4349 log.go:172] (0xc0005f2820) (5) Data frame handling\nI0203 22:50:12.659339    4349 log.go:172] (0xc0005f2820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:50:12.659375    4349 log.go:172] (0xc0007420b0) Data frame received for 3\nI0203 22:50:12.659394    4349 log.go:172] (0xc000a7e000) (3) Data frame handling\nI0203 22:50:12.659409    4349 log.go:172] (0xc000a7e000) (3) Data frame sent\nI0203 22:50:12.726898    4349 log.go:172] (0xc0007420b0) Data frame received for 1\nI0203 22:50:12.727006    4349 log.go:172] (0xc0007221e0) (1) Data frame handling\nI0203 22:50:12.727029    4349 log.go:172] (0xc0007221e0) (1) Data frame sent\nI0203 22:50:12.727327    4349 log.go:172] (0xc0007420b0) (0xc0007221e0) Stream removed, broadcasting: 1\nI0203 22:50:12.727505    4349 log.go:172] (0xc0007420b0) (0xc000a7e000) Stream removed, broadcasting: 3\nI0203 22:50:12.727607    4349 log.go:172] (0xc0007420b0) (0xc0005f2820) Stream removed, broadcasting: 5\nI0203 22:50:12.727658    4349 log.go:172] (0xc0007420b0) Go away received\nI0203 22:50:12.727840    4349 log.go:172] (0xc0007420b0) (0xc0007221e0) Stream removed, broadcasting: 1\nI0203 22:50:12.727880    4349 log.go:172] (0xc0007420b0) (0xc000a7e000) Stream removed, broadcasting: 3\nI0203 22:50:12.727902    4349 log.go:172] (0xc0007420b0) (0xc0005f2820) Stream removed, broadcasting: 5\n"
Feb  3 22:50:12.738: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:50:12.738: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:50:12.743: INFO: Found 1 stateful pods, waiting for 3
Feb  3 22:50:22.749: INFO: Found 2 stateful pods, waiting for 3
Feb  3 22:50:32.762: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:50:32.762: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:50:32.762: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Confirming that stateful set scale-down halts with an unhealthy stateful pod
Feb  3 22:50:32.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:50:33.148: INFO: stderr: "I0203 22:50:32.980805    4368 log.go:172] (0xc0003f2000) (0xc00090c000) Create stream\nI0203 22:50:32.981025    4368 log.go:172] (0xc0003f2000) (0xc00090c000) Stream added, broadcasting: 1\nI0203 22:50:32.985804    4368 log.go:172] (0xc0003f2000) Reply frame received for 1\nI0203 22:50:32.985828    4368 log.go:172] (0xc0003f2000) (0xc0005c9cc0) Create stream\nI0203 22:50:32.985833    4368 log.go:172] (0xc0003f2000) (0xc0005c9cc0) Stream added, broadcasting: 3\nI0203 22:50:32.987341    4368 log.go:172] (0xc0003f2000) Reply frame received for 3\nI0203 22:50:32.987362    4368 log.go:172] (0xc0003f2000) (0xc0008f88c0) Create stream\nI0203 22:50:32.987369    4368 log.go:172] (0xc0003f2000) (0xc0008f88c0) Stream added, broadcasting: 5\nI0203 22:50:32.991511    4368 log.go:172] (0xc0003f2000) Reply frame received for 5\nI0203 22:50:33.069449    4368 log.go:172] (0xc0003f2000) Data frame received for 5\nI0203 22:50:33.069515    4368 log.go:172] (0xc0008f88c0) (5) Data frame handling\nI0203 22:50:33.069550    4368 log.go:172] (0xc0008f88c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:50:33.069609    4368 log.go:172] (0xc0003f2000) Data frame received for 3\nI0203 22:50:33.069622    4368 log.go:172] (0xc0005c9cc0) (3) Data frame handling\nI0203 22:50:33.069642    4368 log.go:172] (0xc0005c9cc0) (3) Data frame sent\nI0203 22:50:33.136065    4368 log.go:172] (0xc0003f2000) Data frame received for 1\nI0203 22:50:33.136126    4368 log.go:172] (0xc0003f2000) (0xc0005c9cc0) Stream removed, broadcasting: 3\nI0203 22:50:33.136208    4368 log.go:172] (0xc00090c000) (1) Data frame handling\nI0203 22:50:33.136346    4368 log.go:172] (0xc00090c000) (1) Data frame sent\nI0203 22:50:33.136392    4368 log.go:172] (0xc0003f2000) (0xc0008f88c0) Stream removed, broadcasting: 5\nI0203 22:50:33.136439    4368 log.go:172] (0xc0003f2000) (0xc00090c000) Stream removed, broadcasting: 1\nI0203 22:50:33.136466    4368 log.go:172] (0xc0003f2000) Go away received\nI0203 22:50:33.137434    4368 log.go:172] (0xc0003f2000) (0xc00090c000) Stream removed, broadcasting: 1\nI0203 22:50:33.137459    4368 log.go:172] (0xc0003f2000) (0xc0005c9cc0) Stream removed, broadcasting: 3\nI0203 22:50:33.137476    4368 log.go:172] (0xc0003f2000) (0xc0008f88c0) Stream removed, broadcasting: 5\n"
Feb  3 22:50:33.148: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:50:33.148: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:50:33.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:50:33.674: INFO: stderr: "I0203 22:50:33.448398    4385 log.go:172] (0xc0000f4370) (0xc00091a0a0) Create stream\nI0203 22:50:33.448564    4385 log.go:172] (0xc0000f4370) (0xc00091a0a0) Stream added, broadcasting: 1\nI0203 22:50:33.453039    4385 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0203 22:50:33.453073    4385 log.go:172] (0xc0000f4370) (0xc000976000) Create stream\nI0203 22:50:33.453081    4385 log.go:172] (0xc0000f4370) (0xc000976000) Stream added, broadcasting: 3\nI0203 22:50:33.454709    4385 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0203 22:50:33.454740    4385 log.go:172] (0xc0000f4370) (0xc0006e99a0) Create stream\nI0203 22:50:33.454753    4385 log.go:172] (0xc0000f4370) (0xc0006e99a0) Stream added, broadcasting: 5\nI0203 22:50:33.457687    4385 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0203 22:50:33.537128    4385 log.go:172] (0xc0000f4370) Data frame received for 5\nI0203 22:50:33.537162    4385 log.go:172] (0xc0006e99a0) (5) Data frame handling\nI0203 22:50:33.537185    4385 log.go:172] (0xc0006e99a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:50:33.571035    4385 log.go:172] (0xc0000f4370) Data frame received for 3\nI0203 22:50:33.571065    4385 log.go:172] (0xc000976000) (3) Data frame handling\nI0203 22:50:33.571083    4385 log.go:172] (0xc000976000) (3) Data frame sent\nI0203 22:50:33.652772    4385 log.go:172] (0xc0000f4370) (0xc000976000) Stream removed, broadcasting: 3\nI0203 22:50:33.653077    4385 log.go:172] (0xc0000f4370) (0xc0006e99a0) Stream removed, broadcasting: 5\nI0203 22:50:33.653436    4385 log.go:172] (0xc0000f4370) Data frame received for 1\nI0203 22:50:33.653602    4385 log.go:172] (0xc00091a0a0) (1) Data frame handling\nI0203 22:50:33.653671    4385 log.go:172] (0xc00091a0a0) (1) Data frame sent\nI0203 22:50:33.653714    4385 log.go:172] (0xc0000f4370) (0xc00091a0a0) Stream removed, broadcasting: 1\nI0203 22:50:33.653760    4385 log.go:172] (0xc0000f4370) Go away received\nI0203 22:50:33.655383    4385 log.go:172] (0xc0000f4370) (0xc00091a0a0) Stream removed, broadcasting: 1\nI0203 22:50:33.655507    4385 log.go:172] (0xc0000f4370) (0xc000976000) Stream removed, broadcasting: 3\nI0203 22:50:33.655613    4385 log.go:172] (0xc0000f4370) (0xc0006e99a0) Stream removed, broadcasting: 5\n"
Feb  3 22:50:33.675: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:50:33.675: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:50:33.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:50:34.150: INFO: stderr: "I0203 22:50:33.853813    4404 log.go:172] (0xc0009891e0) (0xc0009365a0) Create stream\nI0203 22:50:33.854607    4404 log.go:172] (0xc0009891e0) (0xc0009365a0) Stream added, broadcasting: 1\nI0203 22:50:33.866583    4404 log.go:172] (0xc0009891e0) Reply frame received for 1\nI0203 22:50:33.866695    4404 log.go:172] (0xc0009891e0) (0xc0005f65a0) Create stream\nI0203 22:50:33.866708    4404 log.go:172] (0xc0009891e0) (0xc0005f65a0) Stream added, broadcasting: 3\nI0203 22:50:33.868134    4404 log.go:172] (0xc0009891e0) Reply frame received for 3\nI0203 22:50:33.868155    4404 log.go:172] (0xc0009891e0) (0xc000387360) Create stream\nI0203 22:50:33.868163    4404 log.go:172] (0xc0009891e0) (0xc000387360) Stream added, broadcasting: 5\nI0203 22:50:33.869575    4404 log.go:172] (0xc0009891e0) Reply frame received for 5\nI0203 22:50:33.986691    4404 log.go:172] (0xc0009891e0) Data frame received for 5\nI0203 22:50:33.986972    4404 log.go:172] (0xc000387360) (5) Data frame handling\nI0203 22:50:33.987037    4404 log.go:172] (0xc000387360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:50:34.032367    4404 log.go:172] (0xc0009891e0) Data frame received for 3\nI0203 22:50:34.032431    4404 log.go:172] (0xc0005f65a0) (3) Data frame handling\nI0203 22:50:34.032455    4404 log.go:172] (0xc0005f65a0) (3) Data frame sent\nI0203 22:50:34.138143    4404 log.go:172] (0xc0009891e0) (0xc0005f65a0) Stream removed, broadcasting: 3\nI0203 22:50:34.138293    4404 log.go:172] (0xc0009891e0) Data frame received for 1\nI0203 22:50:34.138314    4404 log.go:172] (0xc0009365a0) (1) Data frame handling\nI0203 22:50:34.138342    4404 log.go:172] (0xc0009891e0) (0xc000387360) Stream removed, broadcasting: 5\nI0203 22:50:34.138407    4404 log.go:172] (0xc0009365a0) (1) Data frame sent\nI0203 22:50:34.138564    4404 log.go:172] (0xc0009891e0) (0xc0009365a0) Stream removed, broadcasting: 1\nI0203 22:50:34.138709    4404 log.go:172] (0xc0009891e0) Go away received\nI0203 22:50:34.140514    4404 log.go:172] (0xc0009891e0) (0xc0009365a0) Stream removed, broadcasting: 1\nI0203 22:50:34.140577    4404 log.go:172] (0xc0009891e0) (0xc0005f65a0) Stream removed, broadcasting: 3\nI0203 22:50:34.140626    4404 log.go:172] (0xc0009891e0) (0xc000387360) Stream removed, broadcasting: 5\n"
Feb  3 22:50:34.151: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:50:34.151: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:50:34.151: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:50:34.219: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  3 22:50:44.236: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:50:44.236: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:50:44.236: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:50:44.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999648s
Feb  3 22:50:45.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993962704s
Feb  3 22:50:46.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.682753402s
Feb  3 22:50:47.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.671779782s
Feb  3 22:50:48.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.662441922s
Feb  3 22:50:49.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.632657974s
Feb  3 22:50:50.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.399462604s
Feb  3 22:50:51.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.386440486s
Feb  3 22:50:52.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.375242814s
Feb  3 22:50:54.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 362.733181ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4037
Feb  3 22:50:55.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:50:55.547: INFO: stderr: "I0203 22:50:55.332081    4423 log.go:172] (0xc000b74000) (0xc000946000) Create stream\nI0203 22:50:55.332200    4423 log.go:172] (0xc000b74000) (0xc000946000) Stream added, broadcasting: 1\nI0203 22:50:55.335073    4423 log.go:172] (0xc000b74000) Reply frame received for 1\nI0203 22:50:55.335320    4423 log.go:172] (0xc000b74000) (0xc0009460a0) Create stream\nI0203 22:50:55.335356    4423 log.go:172] (0xc000b74000) (0xc0009460a0) Stream added, broadcasting: 3\nI0203 22:50:55.336726    4423 log.go:172] (0xc000b74000) Reply frame received for 3\nI0203 22:50:55.336764    4423 log.go:172] (0xc000b74000) (0xc0005d95e0) Create stream\nI0203 22:50:55.336773    4423 log.go:172] (0xc000b74000) (0xc0005d95e0) Stream added, broadcasting: 5\nI0203 22:50:55.338260    4423 log.go:172] (0xc000b74000) Reply frame received for 5\nI0203 22:50:55.428723    4423 log.go:172] (0xc000b74000) Data frame received for 5\nI0203 22:50:55.428778    4423 log.go:172] (0xc0005d95e0) (5) Data frame handling\nI0203 22:50:55.428805    4423 log.go:172] (0xc0005d95e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:50:55.430845    4423 log.go:172] (0xc000b74000) Data frame received for 3\nI0203 22:50:55.430875    4423 log.go:172] (0xc0009460a0) (3) Data frame handling\nI0203 22:50:55.430887    4423 log.go:172] (0xc0009460a0) (3) Data frame sent\nI0203 22:50:55.533510    4423 log.go:172] (0xc000b74000) Data frame received for 1\nI0203 22:50:55.533653    4423 log.go:172] (0xc000946000) (1) Data frame handling\nI0203 22:50:55.533688    4423 log.go:172] (0xc000946000) (1) Data frame sent\nI0203 22:50:55.534237    4423 log.go:172] (0xc000b74000) (0xc000946000) Stream removed, broadcasting: 1\nI0203 22:50:55.535414    4423 log.go:172] (0xc000b74000) (0xc0009460a0) Stream removed, broadcasting: 3\nI0203 22:50:55.535653    4423 log.go:172] (0xc000b74000) (0xc0005d95e0) Stream removed, broadcasting: 5\nI0203 22:50:55.535704    4423 log.go:172] (0xc000b74000) (0xc000946000) Stream removed, broadcasting: 1\nI0203 22:50:55.535717    4423 log.go:172] (0xc000b74000) (0xc0009460a0) Stream removed, broadcasting: 3\nI0203 22:50:55.535725    4423 log.go:172] (0xc000b74000) (0xc0005d95e0) Stream removed, broadcasting: 5\n"
Feb  3 22:50:55.547: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:50:55.547: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:50:55.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:50:55.910: INFO: stderr: "I0203 22:50:55.728154    4446 log.go:172] (0xc000bc42c0) (0xc0008f8280) Create stream\nI0203 22:50:55.728287    4446 log.go:172] (0xc000bc42c0) (0xc0008f8280) Stream added, broadcasting: 1\nI0203 22:50:55.731318    4446 log.go:172] (0xc000bc42c0) Reply frame received for 1\nI0203 22:50:55.731362    4446 log.go:172] (0xc000bc42c0) (0xc000ac80a0) Create stream\nI0203 22:50:55.731383    4446 log.go:172] (0xc000bc42c0) (0xc000ac80a0) Stream added, broadcasting: 3\nI0203 22:50:55.733315    4446 log.go:172] (0xc000bc42c0) Reply frame received for 3\nI0203 22:50:55.733999    4446 log.go:172] (0xc000bc42c0) (0xc000bbc140) Create stream\nI0203 22:50:55.734121    4446 log.go:172] (0xc000bc42c0) (0xc000bbc140) Stream added, broadcasting: 5\nI0203 22:50:55.746053    4446 log.go:172] (0xc000bc42c0) Reply frame received for 5\nI0203 22:50:55.823735    4446 log.go:172] (0xc000bc42c0) Data frame received for 5\nI0203 22:50:55.823810    4446 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0203 22:50:55.823863    4446 log.go:172] (0xc000bbc140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:50:55.824497    4446 log.go:172] (0xc000bc42c0) Data frame received for 3\nI0203 22:50:55.824531    4446 log.go:172] (0xc000ac80a0) (3) Data frame handling\nI0203 22:50:55.824547    4446 log.go:172] (0xc000ac80a0) (3) Data frame sent\nI0203 22:50:55.898978    4446 log.go:172] (0xc000bc42c0) Data frame received for 1\nI0203 22:50:55.899035    4446 log.go:172] (0xc000bc42c0) (0xc000ac80a0) Stream removed, broadcasting: 3\nI0203 22:50:55.899106    4446 log.go:172] (0xc0008f8280) (1) Data frame handling\nI0203 22:50:55.899132    4446 log.go:172] (0xc0008f8280) (1) Data frame sent\nI0203 22:50:55.899162    4446 log.go:172] (0xc000bc42c0) (0xc0008f8280) Stream removed, broadcasting: 1\nI0203 22:50:55.899201    4446 log.go:172] (0xc000bc42c0) (0xc000bbc140) Stream removed, broadcasting: 5\nI0203 22:50:55.900250    4446 log.go:172] (0xc000bc42c0) (0xc0008f8280) Stream removed, broadcasting: 1\nI0203 22:50:55.900288    4446 log.go:172] (0xc000bc42c0) (0xc000ac80a0) Stream removed, broadcasting: 3\nI0203 22:50:55.900302    4446 log.go:172] (0xc000bc42c0) (0xc000bbc140) Stream removed, broadcasting: 5\nI0203 22:50:55.900392    4446 log.go:172] (0xc000bc42c0) Go away received\n"
Feb  3 22:50:55.911: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:50:55.911: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:50:55.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4037 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:50:56.245: INFO: stderr: "I0203 22:50:56.074596    4465 log.go:172] (0xc000b66b00) (0xc000a8c5a0) Create stream\nI0203 22:50:56.074671    4465 log.go:172] (0xc000b66b00) (0xc000a8c5a0) Stream added, broadcasting: 1\nI0203 22:50:56.089839    4465 log.go:172] (0xc000b66b00) Reply frame received for 1\nI0203 22:50:56.089876    4465 log.go:172] (0xc000b66b00) (0xc0006d3b80) Create stream\nI0203 22:50:56.089889    4465 log.go:172] (0xc000b66b00) (0xc0006d3b80) Stream added, broadcasting: 3\nI0203 22:50:56.091929    4465 log.go:172] (0xc000b66b00) Reply frame received for 3\nI0203 22:50:56.091968    4465 log.go:172] (0xc000b66b00) (0xc000672780) Create stream\nI0203 22:50:56.091999    4465 log.go:172] (0xc000b66b00) (0xc000672780) Stream added, broadcasting: 5\nI0203 22:50:56.094112    4465 log.go:172] (0xc000b66b00) Reply frame received for 5\nI0203 22:50:56.163567    4465 log.go:172] (0xc000b66b00) Data frame received for 3\nI0203 22:50:56.163694    4465 log.go:172] (0xc0006d3b80) (3) Data frame handling\nI0203 22:50:56.163776    4465 log.go:172] (0xc0006d3b80) (3) Data frame sent\nI0203 22:50:56.163899    4465 log.go:172] (0xc000b66b00) Data frame received for 5\nI0203 22:50:56.163926    4465 log.go:172] (0xc000672780) (5) Data frame handling\nI0203 22:50:56.164053    4465 log.go:172] (0xc000672780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:50:56.229813    4465 log.go:172] (0xc000b66b00) Data frame received for 1\nI0203 22:50:56.229832    4465 log.go:172] (0xc000a8c5a0) (1) Data frame handling\nI0203 22:50:56.229855    4465 log.go:172] (0xc000a8c5a0) (1) Data frame sent\nI0203 22:50:56.230219    4465 log.go:172] (0xc000b66b00) (0xc000a8c5a0) Stream removed, broadcasting: 1\nI0203 22:50:56.230297    4465 log.go:172] (0xc000b66b00) (0xc0006d3b80) Stream removed, broadcasting: 3\nI0203 22:50:56.231834    4465 log.go:172] (0xc000b66b00) (0xc000672780) Stream removed, broadcasting: 5\nI0203 22:50:56.231929    4465 log.go:172] (0xc000b66b00) Go away received\nI0203 22:50:56.232058    4465 log.go:172] (0xc000b66b00) (0xc000a8c5a0) Stream removed, broadcasting: 1\nI0203 22:50:56.232075    4465 log.go:172] (0xc000b66b00) (0xc0006d3b80) Stream removed, broadcasting: 3\nI0203 22:50:56.232086    4465 log.go:172] (0xc000b66b00) (0xc000672780) Stream removed, broadcasting: 5\n"
Feb  3 22:50:56.245: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:50:56.245: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:50:56.245: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 22:51:26.273: INFO: Deleting all statefulset in ns statefulset-4037
Feb  3 22:51:26.280: INFO: Scaling statefulset ss to 0
Feb  3 22:51:26.293: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:51:26.296: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:51:26.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4037" for this suite.

• [SLOW TEST:104.829 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":267,"skipped":4274,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:51:26.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3407
STEP: Creating an active service to test reachability when its FQDN is referred to as the externalName of another service
STEP: creating service externalsvc in namespace services-3407
STEP: creating replication controller externalsvc in namespace services-3407
I0203 22:51:26.588162       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3407, replica count: 2
I0203 22:51:29.639174       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:51:32.639953       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:51:35.640657       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 22:51:38.641072       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb  3 22:51:38.669: INFO: Creating new exec pod
Feb  3 22:51:46.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3407 execpodn6fw5 -- /bin/sh -x -c nslookup clusterip-service'
Feb  3 22:51:47.179: INFO: stderr: "I0203 22:51:46.957137    4484 log.go:172] (0xc000a46370) (0xc000a306e0) Create stream\nI0203 22:51:46.957272    4484 log.go:172] (0xc000a46370) (0xc000a306e0) Stream added, broadcasting: 1\nI0203 22:51:46.961465    4484 log.go:172] (0xc000a46370) Reply frame received for 1\nI0203 22:51:46.961622    4484 log.go:172] (0xc000a46370) (0xc00066bae0) Create stream\nI0203 22:51:46.961701    4484 log.go:172] (0xc000a46370) (0xc00066bae0) Stream added, broadcasting: 3\nI0203 22:51:46.963764    4484 log.go:172] (0xc000a46370) Reply frame received for 3\nI0203 22:51:46.963802    4484 log.go:172] (0xc000a46370) (0xc000a260a0) Create stream\nI0203 22:51:46.963818    4484 log.go:172] (0xc000a46370) (0xc000a260a0) Stream added, broadcasting: 5\nI0203 22:51:46.965754    4484 log.go:172] (0xc000a46370) Reply frame received for 5\nI0203 22:51:47.060425    4484 log.go:172] (0xc000a46370) Data frame received for 5\nI0203 22:51:47.061008    4484 log.go:172] (0xc000a260a0) (5) Data frame handling\nI0203 22:51:47.061077    4484 log.go:172] (0xc000a260a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0203 22:51:47.082437    4484 log.go:172] (0xc000a46370) Data frame received for 3\nI0203 22:51:47.082516    4484 log.go:172] (0xc00066bae0) (3) Data frame handling\nI0203 22:51:47.082645    4484 log.go:172] (0xc00066bae0) (3) Data frame sent\nI0203 22:51:47.082786    4484 log.go:172] (0xc000a46370) Data frame received for 3\nI0203 22:51:47.082810    4484 log.go:172] (0xc00066bae0) (3) Data frame handling\nI0203 22:51:47.082815    4484 log.go:172] (0xc00066bae0) (3) Data frame sent\nI0203 22:51:47.167315    4484 log.go:172] (0xc000a46370) Data frame received for 1\nI0203 22:51:47.167413    4484 log.go:172] (0xc000a46370) (0xc00066bae0) Stream removed, broadcasting: 3\nI0203 22:51:47.167566    4484 log.go:172] (0xc000a306e0) (1) Data frame handling\nI0203 22:51:47.167721    4484 log.go:172] (0xc000a306e0) (1) Data frame sent\nI0203 22:51:47.167893    4484 log.go:172] (0xc000a46370) (0xc000a260a0) Stream removed, broadcasting: 5\nI0203 22:51:47.168075    4484 log.go:172] (0xc000a46370) (0xc000a306e0) Stream removed, broadcasting: 1\nI0203 22:51:47.168131    4484 log.go:172] (0xc000a46370) Go away received\nI0203 22:51:47.170639    4484 log.go:172] (0xc000a46370) (0xc000a306e0) Stream removed, broadcasting: 1\nI0203 22:51:47.170746    4484 log.go:172] (0xc000a46370) (0xc00066bae0) Stream removed, broadcasting: 3\nI0203 22:51:47.170839    4484 log.go:172] (0xc000a46370) (0xc000a260a0) Stream removed, broadcasting: 5\n"
Feb  3 22:51:47.179: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3407.svc.cluster.local\tcanonical name = externalsvc.services-3407.svc.cluster.local.\nName:\texternalsvc.services-3407.svc.cluster.local\nAddress: 10.96.44.28\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3407, will wait for the garbage collector to delete the pods
Feb  3 22:51:47.245: INFO: Deleting ReplicationController externalsvc took: 10.179594ms
Feb  3 22:51:47.546: INFO: Terminating ReplicationController externalsvc pods took: 300.362941ms
Feb  3 22:52:02.388: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:52:02.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3407" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:36.112 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":268,"skipped":4293,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:52:02.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-2306/secret-test-5ccbcec6-4619-40a9-add9-4511cef5215c
STEP: Creating a pod to test consume secrets
Feb  3 22:52:02.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0" in namespace "secrets-2306" to be "success or failure"
Feb  3 22:52:02.639: INFO: Pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.357929ms
Feb  3 22:52:04.644: INFO: Pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026734415s
Feb  3 22:52:06.650: INFO: Pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032989298s
Feb  3 22:52:08.661: INFO: Pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043481522s
Feb  3 22:52:10.669: INFO: Pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051346017s
Feb  3 22:52:12.674: INFO: Pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056981464s
STEP: Saw pod success
Feb  3 22:52:12.674: INFO: Pod "pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0" satisfied condition "success or failure"
Feb  3 22:52:12.677: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0 container env-test: 
STEP: delete the pod
Feb  3 22:52:12.721: INFO: Waiting for pod pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0 to disappear
Feb  3 22:52:12.774: INFO: Pod pod-configmaps-9336d2da-e79f-4673-84d0-1597366253c0 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:52:12.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2306" for this suite.

• [SLOW TEST:10.318 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4302,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:52:12.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-2d614520-49ce-4fde-9573-47323aac8d73
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2d614520-49ce-4fde-9573-47323aac8d73
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:53:48.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9220" for this suite.

• [SLOW TEST:95.883 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4331,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:53:48.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-92qh
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 22:53:48.846: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-92qh" in namespace "subpath-8706" to be "success or failure"
Feb  3 22:53:48.871: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Pending", Reason="", readiness=false. Elapsed: 24.723956ms
Feb  3 22:53:50.878: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03250956s
Feb  3 22:53:52.885: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039308674s
Feb  3 22:53:54.892: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046562542s
Feb  3 22:53:56.901: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05557615s
Feb  3 22:53:58.906: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 10.060226436s
Feb  3 22:54:00.939: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 12.093045416s
Feb  3 22:54:02.947: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 14.100805684s
Feb  3 22:54:04.957: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 16.111348576s
Feb  3 22:54:06.966: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 18.119855897s
Feb  3 22:54:08.972: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 20.126605497s
Feb  3 22:54:10.979: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 22.13334318s
Feb  3 22:54:12.986: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 24.140346214s
Feb  3 22:54:14.991: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 26.145250976s
Feb  3 22:54:17.001: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Running", Reason="", readiness=true. Elapsed: 28.15544362s
Feb  3 22:54:19.006: INFO: Pod "pod-subpath-test-secret-92qh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.160554928s
STEP: Saw pod success
Feb  3 22:54:19.006: INFO: Pod "pod-subpath-test-secret-92qh" satisfied condition "success or failure"
Feb  3 22:54:19.013: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-92qh container test-container-subpath-secret-92qh: 
STEP: delete the pod
Feb  3 22:54:19.102: INFO: Waiting for pod pod-subpath-test-secret-92qh to disappear
Feb  3 22:54:19.111: INFO: Pod pod-subpath-test-secret-92qh no longer exists
STEP: Deleting pod pod-subpath-test-secret-92qh
Feb  3 22:54:19.111: INFO: Deleting pod "pod-subpath-test-secret-92qh" in namespace "subpath-8706"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:54:19.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8706" for this suite.

• [SLOW TEST:30.445 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":271,"skipped":4343,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:54:19.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:54:19.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8" in namespace "downward-api-1922" to be "success or failure"
Feb  3 22:54:19.256: INFO: Pod "downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.753636ms
Feb  3 22:54:21.304: INFO: Pod "downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055800305s
Feb  3 22:54:23.310: INFO: Pod "downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061954066s
Feb  3 22:54:25.326: INFO: Pod "downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078276305s
Feb  3 22:54:27.339: INFO: Pod "downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091200347s
STEP: Saw pod success
Feb  3 22:54:27.339: INFO: Pod "downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8" satisfied condition "success or failure"
Feb  3 22:54:27.345: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8 container client-container: 
STEP: delete the pod
Feb  3 22:54:27.443: INFO: Waiting for pod downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8 to disappear
Feb  3 22:54:27.456: INFO: Pod downwardapi-volume-77e2a7c0-b499-463d-872e-4075e48558d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:54:27.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1922" for this suite.

• [SLOW TEST:8.457 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4373,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:54:27.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-2eb02843-617d-454d-b51a-946fcc742acc
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:54:27.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1027" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":273,"skipped":4384,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:54:27.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  3 22:54:27.879: INFO: Waiting up to 5m0s for pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91" in namespace "downward-api-7960" to be "success or failure"
Feb  3 22:54:27.885: INFO: Pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91": Phase="Pending", Reason="", readiness=false. Elapsed: 5.877309ms
Feb  3 22:54:29.891: INFO: Pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0117668s
Feb  3 22:54:31.903: INFO: Pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024325015s
Feb  3 22:54:33.959: INFO: Pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080472171s
Feb  3 22:54:35.969: INFO: Pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090560554s
Feb  3 22:54:37.978: INFO: Pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098751002s
STEP: Saw pod success
Feb  3 22:54:37.978: INFO: Pod "downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91" satisfied condition "success or failure"
Feb  3 22:54:37.983: INFO: Trying to get logs from node jerma-node pod downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91 container dapi-container: 
STEP: delete the pod
Feb  3 22:54:38.169: INFO: Waiting for pod downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91 to disappear
Feb  3 22:54:38.178: INFO: Pod downward-api-9d826a77-a40a-4ca9-b6e8-c9aafbebca91 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:54:38.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7960" for this suite.

• [SLOW TEST:10.472 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4419,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:54:38.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:54:38.251: INFO: Creating deployment "webserver-deployment"
Feb  3 22:54:38.257: INFO: Waiting for observed generation 1
Feb  3 22:54:40.574: INFO: Waiting for all required pods to come up
Feb  3 22:54:41.202: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  3 22:55:05.541: INFO: Waiting for deployment "webserver-deployment" to complete
Feb  3 22:55:05.553: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb  3 22:55:05.564: INFO: Updating deployment webserver-deployment
Feb  3 22:55:05.564: INFO: Waiting for observed generation 2
Feb  3 22:55:08.632: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  3 22:55:09.485: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  3 22:55:09.519: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb  3 22:55:09.708: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  3 22:55:09.708: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  3 22:55:09.764: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb  3 22:55:09.771: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb  3 22:55:09.771: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb  3 22:55:09.787: INFO: Updating deployment webserver-deployment
Feb  3 22:55:09.787: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb  3 22:55:10.005: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  3 22:55:11.566: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
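The 20/13 split verified in the last two steps is the proportional-scaling arithmetic itself: with maxSurge=3 the deployment may run up to 30+3=33 pods mid-rollout, and the controller divides that allowance between the two ReplicaSets in proportion to their current sizes (8 and 5, 13 in total), assigning the rounding leftover so the totals stay exact. A back-of-the-envelope sketch that reproduces only this case; the controller's real rounding and leftover rules live in its deployment-scaling utilities:

  package main

  import "fmt"

  func main() {
      oldRS, newRS := 8, 5   // current ReplicaSet sizes mid-rollout
      allowed := 30 + 3      // new .spec.replicas plus maxSurge
      total := oldRS + newRS // 13

      oldTarget := oldRS * allowed / total // 8*33/13 = 20 (integer floor)
      newTarget := allowed - oldTarget     // leftover goes to the new RS: 13

      fmt.Println(oldTarget, newTarget) // 20 13, matching the assertions above
  }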
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  3 22:55:17.424: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-548 /apis/apps/v1/namespaces/deployment-548/deployments/webserver-deployment 8754d491-6684-4b2d-b4a6-c64615e24549 6222969 3 2020-02-03 22:54:38 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00369d608  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-03 22:55:09 +0000 UTC,LastTransitionTime:2020-02-03 22:55:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-03 22:55:17 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Feb  3 22:55:19.266: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-548 /apis/apps/v1/namespaces/deployment-548/replicasets/webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 6222948 3 2020-02-03 22:55:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8754d491-6684-4b2d-b4a6-c64615e24549 0xc00369dad7 0xc00369dad8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00369db48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  3 22:55:19.266: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb  3 22:55:19.266: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-548 /apis/apps/v1/namespaces/deployment-548/replicasets/webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 6222947 3 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8754d491-6684-4b2d-b4a6-c64615e24549 0xc00369da17 0xc00369da18}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00369da78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb  3 22:55:22.265: INFO: Pod "webserver-deployment-595b5b9587-2skmg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2skmg webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-2skmg 39675c2d-eab7-4833-b751-0ef7d955ecdb 6222906 0 2020-02-03 22:55:11 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4af27 0xc002a4af28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.265: INFO: Pod "webserver-deployment-595b5b9587-878bq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-878bq webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-878bq f8dd62c2-04f5-439d-9393-f6e623665fec 6222788 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b037 0xc002a4b038}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:55:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a8453b4b9a9dcecaaf2a113f2f00a544b630a25f55197a0b0d955d61289dec37,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.266: INFO: Pod "webserver-deployment-595b5b9587-8gvrs" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8gvrs webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-8gvrs 62e19702-acc7-4528-ba12-628aade1bb4e 6222968 0 2020-02-03 22:55:09 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b1b0 0xc002a4b1b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-03 22:55:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.266: INFO: Pod "webserver-deployment-595b5b9587-9x8px" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9x8px webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-9x8px 32019881-f447-4e63-a4fa-c882db07b9e5 6222929 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b2f7 0xc002a4b2f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.267: INFO: Pod "webserver-deployment-595b5b9587-d659c" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-d659c webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-d659c aaccee28-3bdd-438d-8744-097542edb246 6222804 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b417 0xc002a4b418}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:55:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://2fdf28e9d83b441dcfd0b40ce17c760f96e472b3e652f9d1a097473443a97e5c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.267: INFO: Pod "webserver-deployment-595b5b9587-dhgsn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dhgsn webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-dhgsn a0971586-439a-4821-a05a-484e95506569 6222942 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b580 0xc002a4b581}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.267: INFO: Pod "webserver-deployment-595b5b9587-f888q" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-f888q webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-f888q ef0f88c6-33b3-4b08-b1fe-a65f0050cd0b 6222810 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b687 0xc002a4b688}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:55:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://40be50be99434a6d9377c35108cdd74df6f32a09135ccb27f47de0b6384598de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.268: INFO: Pod "webserver-deployment-595b5b9587-fhxkq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fhxkq webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-fhxkq 72a455db-3e22-472a-8f04-2d368e02f89a 6222939 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b7f0 0xc002a4b7f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.268: INFO: Pod "webserver-deployment-595b5b9587-jhfg6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jhfg6 webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-jhfg6 f2b8672b-783f-43e6-8b88-fda92654b5b3 6222938 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4b8f7 0xc002a4b8f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.268: INFO: Pod "webserver-deployment-595b5b9587-jzwkp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jzwkp webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-jzwkp a209816f-9113-4101-9993-8676b8ce8c45 6222943 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4ba07 0xc002a4ba08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.269: INFO: Pod "webserver-deployment-595b5b9587-kj2fq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kj2fq webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-kj2fq c9dbe13e-3a41-4385-9550-9315332831b6 6222792 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4bb17 0xc002a4bb18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:55:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f64552781a17d5917236ad6d2091f07380480cf5c71f4e03a706723870ae40dc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
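When counting available replicas, the deployment controller applies a stricter form of readiness: the pod must have been Ready for at least the deployment's minReadySeconds, measured from the Ready condition's LastTransitionTime (22:55:01 for kj2fq above). A sketch of that rule, assuming no clock skew between controller and kubelet; this paraphrases the logic, not Kubernetes' exact pod-util code:

package poddump

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// availableFor reports whether the pod has been Ready for at least
// minReadySeconds as of now, the stricter readiness the deployment
// controller uses when it tallies available replicas.
func availableFor(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}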
Feb  3 22:55:22.269: INFO: Pod "webserver-deployment-595b5b9587-p4b4g" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-p4b4g webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-p4b4g a46f33ad-5fa1-493a-9be7-19f02521da5c 6222775 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4bc90 0xc002a4bc91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:54:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://297cbe362d81680e2542b8a448d323a8a16a9554d154f5533ba239fb280adccb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
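Each dump's ObjectMeta carries a single owner reference ({apps/v1 ReplicaSet webserver-deployment-595b5b9587 ...}), which is what ties these pods back to one ReplicaSet generation. A sketch of resolving it with apimachinery's helper, assuming the reference's controller flag is set, as it is when the ReplicaSet controller creates or adopts a pod:

package poddump

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// owningReplicaSet returns the name of the ReplicaSet that controls
// the pod, matching the OwnerReferences entry printed in the dumps.
func owningReplicaSet(pod *corev1.Pod) (string, bool) {
	if ref := metav1.GetControllerOf(pod); ref != nil && ref.Kind == "ReplicaSet" {
		return ref.Name, true
	}
	return "", false
}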
Feb  3 22:55:22.269: INFO: Pod "webserver-deployment-595b5b9587-q7mlk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-q7mlk webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-q7mlk ab86754e-889e-4a73-a26b-c181beb87dcc 6222927 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4be30 0xc002a4be31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.270: INFO: Pod "webserver-deployment-595b5b9587-qwpqp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qwpqp webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-qwpqp dcf80b81-afa6-495c-bfed-62581ba345ad 6222941 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002a4bf47 0xc002a4bf48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.270: INFO: Pod "webserver-deployment-595b5b9587-rhm6x" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rhm6x webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-rhm6x e1b3b315-0cdf-479a-babf-732c232a8db7 6222930 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002ea60d7 0xc002ea60d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.270: INFO: Pod "webserver-deployment-595b5b9587-s9vtx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s9vtx webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-s9vtx 988975fd-6884-4e27-a097-1977bc3c2d07 6222928 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002ea6217 0xc002ea6218}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
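The same grouping is visible in the labels: every pod carries pod-template-hash, 595b5b9587 for the old template and (further down) c7997dcc8 for the new one, so one ReplicaSet generation can be listed with a plain label selector. A hypothetical client-go sketch, assuming a client-go version whose List call takes a context:

package poddump

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsForHash lists the pods of one ReplicaSet generation in ns by the
// pod-template-hash label shown in the dumps above.
func podsForHash(cs kubernetes.Interface, ns, hash string) ([]corev1.Pod, error) {
	list, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=" + hash})
	if err != nil {
		return nil, err
	}
	return list.Items, nil
}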
Feb  3 22:55:22.271: INFO: Pod "webserver-deployment-595b5b9587-t9fd9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t9fd9 webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-t9fd9 949d1f1e-5864-4666-86fa-ef645f35dd9e 6222784 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002ea6517 0xc002ea6518}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:54:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://cbf331ef490b24a53423535b469530630f535a67df83182b359cc82cd6c9c49a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.271: INFO: Pod "webserver-deployment-595b5b9587-vnccz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vnccz webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-vnccz 2561579a-b955-488e-aeb5-29bd6ff14fa8 6222780 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002ea68b0 0xc002ea68b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-03 22:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:54:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://2500b21a50ff3c2232fcd70dfa033f9aa34cbcf47b7e5a377f420b63de153cb2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
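Every container in these dumps declares empty Limits and Requests, which is why each PodStatus reports QOSClass:BestEffort. A sketch of that classification rule for regular containers (the real classifier also inspects init containers):

package poddump

import (
	corev1 "k8s.io/api/core/v1"
)

// bestEffort reports whether no container declares resource requests
// or limits, the condition under which a pod is classed BestEffort,
// as every dump in this test is.
func bestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}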
Feb  3 22:55:22.272: INFO: Pod "webserver-deployment-595b5b9587-wj45j" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wj45j webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-wj45j 2f947214-fc1a-43c2-b367-964848f1d1d5 6222807 0 2020-02-03 22:54:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002ea6a50 0xc002ea6a51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-03 22:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-03 22:55:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dc3499eb0902cccb60f66ccf7b08b393ddee0920b4c2422599733a95df3ebd13,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.272: INFO: Pod "webserver-deployment-595b5b9587-zzg4d" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zzg4d webserver-deployment-595b5b9587- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-595b5b9587-zzg4d a1c1fb7e-20ce-4f24-b3cb-3a8304ba302f 6222970 0 2020-02-03 22:55:11 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 98ef346c-3e65-40c4-a20d-483fbe0f37cd 0xc002ea6c90 0xc002ea6c91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-03 22:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
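zzg4d above shows the intermediate state between scheduling and readiness: the pod is Pending with Ready=False and reason ContainersNotReady because its httpd container is still Waiting with reason ContainerCreating. A sketch that extracts those waiting reasons from a dumped status:

package poddump

import (
	corev1 "k8s.io/api/core/v1"
)

// waitingReasons maps each container stuck in a Waiting state to its
// reason, e.g. "ContainerCreating" for httpd in the dump above; this
// is what the ContainersNotReady condition is summarizing.
func waitingReasons(pod *corev1.Pod) map[string]string {
	reasons := map[string]string{}
	for _, s := range pod.Status.ContainerStatuses {
		if s.State.Waiting != nil {
			reasons[s.Name] = s.State.Waiting.Reason
		}
	}
	return reasons
}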
Feb  3 22:55:22.272: INFO: Pod "webserver-deployment-c7997dcc8-46dxk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-46dxk webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-46dxk b506248a-da73-4ac4-86c2-9a40554ea8ef 6222925 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc002ea6f67 0xc002ea6f68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
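The dumps from 46dxk onward belong to the new c7997dcc8 ReplicaSet, whose template references the image webserver:404; that tag cannot be pulled, so these pods never become Ready and the rollout is deliberately held partway, which is what this deployment test exercises. Note also the two NoExecute tolerations stamped onto every pod in this log, matching what the DefaultTolerationSeconds admission plugin adds when a pod declares none. A sketch constructing the same pair:

package poddump

import (
	corev1 "k8s.io/api/core/v1"
)

// defaultTolerations mirrors the not-ready/unreachable NoExecute
// tolerations (300s each) visible in every dump above.
func defaultTolerations() []corev1.Toleration {
	secs := int64(300)
	return []corev1.Toleration{
		{
			Key:               "node.kubernetes.io/not-ready",
			Operator:          corev1.TolerationOpExists,
			Effect:            corev1.TaintEffectNoExecute,
			TolerationSeconds: &secs,
		},
		{
			Key:               "node.kubernetes.io/unreachable",
			Operator:          corev1.TolerationOpExists,
			Effect:            corev1.TaintEffectNoExecute,
			TolerationSeconds: &secs,
		},
	}
}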
Feb  3 22:55:22.273: INFO: Pod "webserver-deployment-c7997dcc8-9469v" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9469v webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-9469v c5aa38b4-5d34-4b32-a831-a878d73841de 6222871 0 2020-02-03 22:55:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc002ea7117 0xc002ea7118}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-03 22:55:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.273: INFO: Pod "webserver-deployment-c7997dcc8-97gfq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-97gfq webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-97gfq 06ba1017-1a18-4acd-bded-cb35621cf04d 6222949 0 2020-02-03 22:55:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc002ea7357 0xc002ea7358}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-03 22:55:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.273: INFO: Pod "webserver-deployment-c7997dcc8-98pjv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-98pjv webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-98pjv 3d7627e5-4a70-4f92-9de0-36394f028166 6222852 0 2020-02-03 22:55:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc002ea7747 0xc002ea7748}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-03 22:55:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.274: INFO: Pod "webserver-deployment-c7997dcc8-brsr4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-brsr4 webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-brsr4 1b165af2-60ad-4270-bd56-80aa1a15a03e 6222924 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc002ea7977 0xc002ea7978}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.274: INFO: Pod "webserver-deployment-c7997dcc8-dkk6p" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dkk6p webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-dkk6p d5bdc874-1854-444d-b88b-ff5a8374db3a 6222848 0 2020-02-03 22:55:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc002ea7d67 0xc002ea7d68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-03 22:55:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.275: INFO: Pod "webserver-deployment-c7997dcc8-fzthn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fzthn webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-fzthn ac0ed9a0-a632-4d0f-bf9f-836a51041d3b 6222874 0 2020-02-03 22:55:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc002ea7ff7 0xc002ea7ff8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-03 22:55:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.275: INFO: Pod "webserver-deployment-c7997dcc8-gwq2l" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gwq2l webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-gwq2l 853138db-47bc-457c-94c5-5e800058059e 6222977 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc001fac177 0xc001fac178}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-03 22:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.275: INFO: Pod "webserver-deployment-c7997dcc8-k9ssr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k9ssr webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-k9ssr 2d6265c8-8e81-4bc4-8a0b-558c54d7e808 6222926 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc001fac2f7 0xc001fac2f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.276: INFO: Pod "webserver-deployment-c7997dcc8-nj8zv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nj8zv webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-nj8zv bb507c30-671d-4f7f-b7c6-741cc180324d 6222932 0 2020-02-03 22:55:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc001fac417 0xc001fac418}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.276: INFO: Pod "webserver-deployment-c7997dcc8-rp8wn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rp8wn webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-rp8wn 51c09831-941c-44ec-adaf-51f164fffb61 6222974 0 2020-02-03 22:55:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc001fac547 0xc001fac548}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-03 22:55:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.277: INFO: Pod "webserver-deployment-c7997dcc8-x6hzv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x6hzv webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-x6hzv 89cfb5a9-746c-4100-8638-d96d1b70ae5a 6222854 0 2020-02-03 22:55:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc001fac7f7 0xc001fac7f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-03 22:55:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  3 22:55:22.277: INFO: Pod "webserver-deployment-c7997dcc8-xdq2q" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xdq2q webserver-deployment-c7997dcc8- deployment-548 /api/v1/namespaces/deployment-548/pods/webserver-deployment-c7997dcc8-xdq2q 0709ee13-cd50-4ee3-b62c-be49cd11d7a0 6222950 0 2020-02-03 22:55:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 0461f367-241c-41bf-b048-dfa3bbafac53 0xc001faca47 0xc001faca48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z2ksp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z2ksp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z2ksp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-03 22:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-03 22:55:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
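Every pod dumped above is "not available" for the same reason: the updated template points at the image tag webserver:404, which appears deliberately unresolvable, so each httpd container sits in ContainerCreating and the pod's Ready condition stays False. A minimal sketch of the availability predicate being applied, assuming the standard Kubernetes semantics (Ready condition True for at least minReadySeconds); the helper names are illustrative, not the framework's own:

```go
// availability.go: a sketch of the "is not available" check seen above,
// under the usual definition: a pod counts as available once its Ready
// condition has been True for at least minReadySeconds.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// getPodReadyCondition returns the pod's Ready condition, or nil if absent.
func getPodReadyCondition(status corev1.PodStatus) *corev1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == corev1.PodReady {
			return &status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable reports whether the pod is Ready and has been Ready for at
// least minReadySeconds as of the supplied time.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := getPodReadyCondition(pod.Status)
	if c == nil || c.Status != corev1.ConditionTrue {
		return false // e.g. Reason=ContainersNotReady, as in every dump above
	}
	if minReadySeconds == 0 {
		return true
	}
	minReadyDuration := time.Duration(minReadySeconds) * time.Second
	return !c.LastTransitionTime.IsZero() &&
		c.LastTransitionTime.Add(minReadyDuration).Before(now.Time)
}

func main() {
	// Shaped like the c7997dcc8 pods above: Ready=False, ContainersNotReady.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{
			Type:   corev1.PodReady,
			Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady",
		}},
	}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // false
}
```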
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:55:22.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-548" for this suite.

• [SLOW TEST:48.529 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":275,"skipped":4426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:55:26.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 22:55:31.504: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 22:55:33.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:55:37.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:55:39.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:55:44.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716367331, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 22:55:45.989 to 22:56:33.856: INFO: deployment status unchanged over 24 further polls: Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Available=False (Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."), Progressing=True (Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing.")
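The status dumps above come from a readiness poll: the suite keeps re-fetching the webhook Deployment until its Available condition turns True. A minimal stand-alone sketch of such a loop follows; it is not the e2e framework's own helper, it assumes a recent client-go (context-taking Get), and it reuses the kubeconfig path and object names visible in this log.

package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 5m; each attempt dumps the status, which is
	// exactly the shape of the repeated "deployment status" lines above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := client.AppsV1().Deployments("webhook-21").Get(
			context.TODO(), "sample-webhook-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("deployment status: %#v\n", d.Status)
		for _, c := range d.Status.Conditions {
			// Done only once the Available condition flips to True.
			if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}

Checking the Conditions slice rather than ReadyReplicas alone matches what the log prints: the loop keeps running while Available=False/MinimumReplicasUnavailable and exits when the condition transitions.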
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 22:56:36.936: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that the apiserver cannot talk to, with fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:56:37.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-21" for this suite.
STEP: Destroying namespace "webhook-21-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:70.476 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":276,"skipped":4468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:56:37.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  3 22:56:55.562: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 22:56:55.594: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 22:56:57.594: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 22:56:57.602: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 22:56:59.594: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 22:56:59.607: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 22:57:01.594: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 22:57:01.601: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:57:01.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8332" for this suite.

• [SLOW TEST:24.446 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4535,"failed":0}
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:57:01.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Feb  3 22:57:01.826: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6155" to be "success or failure"
Feb  3 22:57:01.922: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 96.452359ms
Feb  3 22:57:03.930: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103646416s
Feb  3 22:57:05.936: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109831176s
Feb  3 22:57:07.945: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118552424s
Feb  3 22:57:09.955: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128667407s
Feb  3 22:57:11.961: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.135131634s
Feb  3 22:57:13.972: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.146115854s
Feb  3 22:57:15.982: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.155673816s
STEP: Saw pod success
Feb  3 22:57:15.982: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  3 22:57:15.986: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  3 22:57:16.109: INFO: Waiting for pod pod-host-path-test to disappear
Feb  3 22:57:16.114: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:57:16.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6155" for this suite.

• [SLOW TEST:14.472 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4536,"failed":0}
Feb  3 22:57:16.124: INFO: Running AfterSuite actions on all nodes
Feb  3 22:57:16.124: INFO: Running AfterSuite actions on node 1
Feb  3 22:57:16.124: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6515.110 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS