I0821 23:05:09.740944 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0821 23:05:09.741151 6 e2e.go:109] Starting e2e run "92598c81-d644-4ac2-836d-c37dc3b59cf3" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598051108 - Will randomize all specs
Will run 278 of 4844 specs

Aug 21 23:05:09.789: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 23:05:09.793: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 21 23:05:09.821: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 21 23:05:09.847: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 21 23:05:09.847: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 21 23:05:09.847: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 21 23:05:09.852: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 21 23:05:09.852: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 21 23:05:09.852: INFO: e2e test version: v1.17.11
Aug 21 23:05:09.853: INFO: kube-apiserver version: v1.17.5
Aug 21 23:05:09.853: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 23:05:09.857: INFO: Cluster IP family: ipv4
S
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:05:09.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
Aug 21 23:05:10.158: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:05:10.159: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 21 23:05:10.176: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 21 23:05:15.180: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 23:05:15.180: INFO: Creating deployment "test-rolling-update-deployment"
Aug 21 23:05:15.185: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 21 23:05:15.212: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 21 23:05:17.393: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 21 23:05:17.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733647915, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733647915, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733647915, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733647915, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 23:05:19.400: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 21 23:05:19.471: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8310 /apis/apps/v1/namespaces/deployment-8310/deployments/test-rolling-update-deployment 8b52756c-7fe7-4470-9560-8948f8abf547 2270331 1 2020-08-21 23:05:15 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00180bdb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 23:05:15 +0000 UTC,LastTransitionTime:2020-08-21 23:05:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-21 23:05:18 +0000 UTC,LastTransitionTime:2020-08-21 23:05:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Aug 21 23:05:19.474: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-8310 /apis/apps/v1/namespaces/deployment-8310/replicasets/test-rolling-update-deployment-67cf4f6444 4d1ea368-9ef1-4950-86cb-843d7ad05a3d 2270320 1 2020-08-21 23:05:15 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 8b52756c-7fe7-4470-9560-8948f8abf547 0xc00179cd57 0xc00179cd58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00179cdc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 23:05:19.474: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 21 23:05:19.474: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8310 /apis/apps/v1/namespaces/deployment-8310/replicasets/test-rolling-update-controller 68ef3e4d-bc02-4d6f-a16a-31efb35519a7 2270329 2 2020-08-21 23:05:10 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment
8b52756c-7fe7-4470-9560-8948f8abf547 0xc00179cc87 0xc00179cc88}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00179cce8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 21 23:05:19.476: INFO: Pod "test-rolling-update-deployment-67cf4f6444-kz5cx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-kz5cx test-rolling-update-deployment-67cf4f6444- deployment-8310 /api/v1/namespaces/deployment-8310/pods/test-rolling-update-deployment-67cf4f6444-kz5cx 23a7e25d-2acb-4059-80da-0dc4f82d6527 2270319 0 2020-08-21 23:05:15 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 4d1ea368-9ef1-4950-86cb-843d7ad05a3d 0xc00179d217 0xc00179d218}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7fphh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7fphh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7fphh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:def
ault-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:05:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:05:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:05:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:05:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.92,StartTime:2020-08-21 23:05:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 23:05:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://21da28af23e517b1b948d9f62d5ad3285b07fbe996b4701fd580dfe12682040e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:05:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8310" for this suite.
• [SLOW TEST:9.625 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":1,"skipped":1,"failed":0}
SS
------------------------------
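For reference, a minimal client-go sketch of the object the test above drives: a Deployment whose selector matches the pre-existing "test-rolling-update-controller" ReplicaSet, so the controller adopts it and stamps the new ReplicaSet with the next revision (the dumps above show deployment.kubernetes.io/revision going from 3546343826724305832 on the old ReplicaSet to 3546343826724305833 on the new one). This is hypothetical standalone code, not the e2e framework's own; it assumes client-go v0.18+ (Create takes a context) and a kubeconfig at /root/.kube/config.

// Hypothetical sketch: creates a Deployment equivalent to
// "test-rolling-update-deployment" above, in namespace "default".
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// The selector matches the adopted replica set's pods ("name: sample-pod").
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable, // at most 25% of pods down during a rollout
					MaxSurge:       &maxSurge,       // at most 25% extra pods during a rollout
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				}}},
			},
		},
	}
	created, err := cs.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment:", created.Name)
}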
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:05:19.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:05:36.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2479" for this suite.
• [SLOW TEST:17.310 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":2,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
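A sketch of the two scoped quotas this test exercises: a quota with scope BestEffort only counts pods that set no resource requests or limits, while NotBestEffort counts the rest, which is exactly the capture/ignore pattern in the STEP lines above. Hypothetical code, assuming client-go v0.18+; the quota names and the hard pod limit are illustrative, not the test's own.

// Hypothetical sketch: creates one BestEffort-scoped and one
// NotBestEffort-scoped ResourceQuota in namespace "default".
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func quota(name string, scope corev1.ResourceQuotaScope) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ResourceQuotaSpec{
			// Only pods matching the scope count against this quota's pod limit.
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
			Scopes: []corev1.ResourceQuotaScope{scope},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	for _, q := range []*corev1.ResourceQuota{
		quota("quota-besteffort", corev1.ResourceQuotaScopeBestEffort),
		quota("quota-not-besteffort", corev1.ResourceQuotaScopeNotBestEffort),
	} {
		if _, err := cs.CoreV1().ResourceQuotas("default").Create(ctx, q, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}

A pod with no requests or limits (QOSClass BestEffort, like the agnhost pod dumped earlier) is charged to the first quota and ignored by the second; setting any request or limit flips it to the other.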
[Conformance]","total":278,"completed":2,"skipped":3,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:05:36.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Aug 21 23:05:36.863: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:05:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7113" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":3,"skipped":22,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:05:36.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3117 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Aug 21 23:05:37.114: INFO: Found 0 stateful pods, waiting for 3 Aug 21 23:05:47.121: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 23:05:47.121: INFO: 
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:05:36.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3117
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 21 23:05:37.114: INFO: Found 0 stateful pods, waiting for 3
Aug 21 23:05:47.121: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 23:05:47.121: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 23:05:47.121: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Aug 21 23:05:57.145: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 23:05:57.145: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 23:05:57.145: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 23:05:57.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3117 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 23:05:59.996: INFO: stderr: "I0821 23:05:59.882429 48 log.go:172] (0xc0009aebb0) (0xc000677e00) Create stream\nI0821 23:05:59.882492 48 log.go:172] (0xc0009aebb0) (0xc000677e00) Stream added, broadcasting: 1\nI0821 23:05:59.885157 48 log.go:172] (0xc0009aebb0) Reply frame received for 1\nI0821 23:05:59.885210 48 log.go:172] (0xc0009aebb0) (0xc000677ea0) Create stream\nI0821 23:05:59.885226 48 log.go:172] (0xc0009aebb0) (0xc000677ea0) Stream added, broadcasting: 3\nI0821 23:05:59.886083 48 log.go:172] (0xc0009aebb0) Reply frame received for 3\nI0821 23:05:59.886130 48 log.go:172] (0xc0009aebb0) (0xc0004da000) Create stream\nI0821 23:05:59.886154 48 log.go:172] (0xc0009aebb0) (0xc0004da000) Stream added, broadcasting: 5\nI0821 23:05:59.886951 48 log.go:172] (0xc0009aebb0) Reply frame received for 5\nI0821 23:05:59.951293 48 log.go:172] (0xc0009aebb0) Data frame received for 5\nI0821 23:05:59.951336 48 log.go:172] (0xc0004da000) (5) Data frame handling\nI0821 23:05:59.951364 48 log.go:172] (0xc0004da000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:05:59.981769 48 log.go:172] (0xc0009aebb0) Data frame received for 3\nI0821 23:05:59.981919 48 log.go:172] (0xc000677ea0) (3) Data frame handling\nI0821 23:05:59.982040 48 log.go:172] (0xc000677ea0) (3) Data frame sent\nI0821 23:05:59.982074 48 log.go:172] (0xc0009aebb0) Data frame received for 3\nI0821 23:05:59.982097 48 log.go:172] (0xc000677ea0) (3) Data frame handling\nI0821 23:05:59.982125 48 log.go:172] (0xc0009aebb0) Data frame received for 5\nI0821 23:05:59.982144 48 log.go:172] (0xc0004da000) (5) Data frame handling\nI0821 23:05:59.984288 48 log.go:172] (0xc0009aebb0) Data frame received for 1\nI0821 23:05:59.984319 48 log.go:172] (0xc000677e00) (1) Data frame handling\nI0821 23:05:59.984338 48 log.go:172] (0xc000677e00) (1) Data frame sent\nI0821 23:05:59.984355 48 log.go:172] (0xc0009aebb0) (0xc000677e00) Stream removed, broadcasting: 1\nI0821 23:05:59.984499 48 log.go:172] (0xc0009aebb0) Go away received\nI0821 23:05:59.984967 48 log.go:172] (0xc0009aebb0) (0xc000677e00) Stream removed, broadcasting: 1\nI0821 23:05:59.984991 48 log.go:172] (0xc0009aebb0) (0xc000677ea0) Stream removed, broadcasting: 3\nI0821 23:05:59.985008 48 log.go:172] (0xc0009aebb0) (0xc0004da000) Stream removed, broadcasting: 5\n"
Aug 21 23:05:59.996: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 23:05:59.996: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 21 23:06:10.024: INFO: Updating stateful set ss2
STEP: Creating a new revision
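The "Updating stateful set ss2" step above amounts to mutating the pod template through an update call, after which the controller replaces pods one at a time in reverse ordinal order (the next STEP). A hedged sketch of that call, assuming client-go v0.18+ and using the conflict-retry pattern typical for controller-managed objects; the set name and image are taken from the log, the namespace "default" is illustrative (the run used statefulset-3117):

// Hypothetical sketch: bump the StatefulSet's template image and report revisions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	sts := cs.AppsV1().StatefulSets("default")

	// Retry on conflict: the controller may update the object concurrently.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := sts.Get(ctx, "ss2", metav1.GetOptions{})
		if err != nil {
			return err
		}
		ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
		_, err = sts.Update(ctx, ss, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}

	// currentRevision vs updateRevision is how the log tracks rollout progress
	// (ss2-84f9d6bf57 vs ss2-65c7964b94 above); they converge when it completes.
	ss, err := sts.Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current:", ss.Status.CurrentRevision, "update:", ss.Status.UpdateRevision)
}

Rolling back, as the later "Rolling back to a previous revision" STEP does, is the same update call with the old template restored.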
STEP: Updating Pods in reverse ordinal order Aug 21 23:06:20.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3117 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:06:20.268: INFO: stderr: "I0821 23:06:20.190277 77 log.go:172] (0xc000840b00) (0xc000792140) Create stream\nI0821 23:06:20.190358 77 log.go:172] (0xc000840b00) (0xc000792140) Stream added, broadcasting: 1\nI0821 23:06:20.193378 77 log.go:172] (0xc000840b00) Reply frame received for 1\nI0821 23:06:20.193418 77 log.go:172] (0xc000840b00) (0xc0001b4000) Create stream\nI0821 23:06:20.193427 77 log.go:172] (0xc000840b00) (0xc0001b4000) Stream added, broadcasting: 3\nI0821 23:06:20.194472 77 log.go:172] (0xc000840b00) Reply frame received for 3\nI0821 23:06:20.194525 77 log.go:172] (0xc000840b00) (0xc0006bda40) Create stream\nI0821 23:06:20.194552 77 log.go:172] (0xc000840b00) (0xc0006bda40) Stream added, broadcasting: 5\nI0821 23:06:20.195488 77 log.go:172] (0xc000840b00) Reply frame received for 5\nI0821 23:06:20.258885 77 log.go:172] (0xc000840b00) Data frame received for 5\nI0821 23:06:20.258943 77 log.go:172] (0xc0006bda40) (5) Data frame handling\nI0821 23:06:20.258958 77 log.go:172] (0xc0006bda40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 23:06:20.258988 77 log.go:172] (0xc000840b00) Data frame received for 3\nI0821 23:06:20.259028 77 log.go:172] (0xc0001b4000) (3) Data frame handling\nI0821 23:06:20.259049 77 log.go:172] (0xc0001b4000) (3) Data frame sent\nI0821 23:06:20.259064 77 log.go:172] (0xc000840b00) Data frame received for 3\nI0821 23:06:20.259079 77 log.go:172] (0xc0001b4000) (3) Data frame handling\nI0821 23:06:20.259101 77 log.go:172] (0xc000840b00) Data frame received for 5\nI0821 23:06:20.259127 77 log.go:172] (0xc0006bda40) (5) Data frame handling\nI0821 23:06:20.260496 77 log.go:172] (0xc000840b00) Data frame received for 1\nI0821 23:06:20.260515 77 log.go:172] (0xc000792140) (1) Data frame handling\nI0821 23:06:20.260539 77 log.go:172] (0xc000792140) (1) Data frame sent\nI0821 23:06:20.260555 77 log.go:172] (0xc000840b00) (0xc000792140) Stream removed, broadcasting: 1\nI0821 23:06:20.260681 77 log.go:172] (0xc000840b00) Go away received\nI0821 23:06:20.260928 77 log.go:172] (0xc000840b00) (0xc000792140) Stream removed, broadcasting: 1\nI0821 23:06:20.260944 77 log.go:172] (0xc000840b00) (0xc0001b4000) Stream removed, broadcasting: 3\nI0821 23:06:20.260950 77 log.go:172] (0xc000840b00) (0xc0006bda40) Stream removed, broadcasting: 5\n" Aug 21 23:06:20.268: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:06:20.268: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:06:50.288: INFO: Waiting for StatefulSet statefulset-3117/ss2 to complete update Aug 21 23:06:50.288: INFO: Waiting for Pod statefulset-3117/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Aug 21 23:07:00.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3117 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:07:00.572: INFO: stderr: "I0821 23:07:00.451407 99 log.go:172] (0xc0005d2840) (0xc000556000) Create stream\nI0821 23:07:00.451479 99 log.go:172] (0xc0005d2840) (0xc000556000) Stream added, broadcasting: 1\nI0821 
23:07:00.453703 99 log.go:172] (0xc0005d2840) Reply frame received for 1\nI0821 23:07:00.453745 99 log.go:172] (0xc0005d2840) (0xc000683a40) Create stream\nI0821 23:07:00.453757 99 log.go:172] (0xc0005d2840) (0xc000683a40) Stream added, broadcasting: 3\nI0821 23:07:00.454495 99 log.go:172] (0xc0005d2840) Reply frame received for 3\nI0821 23:07:00.454554 99 log.go:172] (0xc0005d2840) (0xc0002d8000) Create stream\nI0821 23:07:00.454571 99 log.go:172] (0xc0005d2840) (0xc0002d8000) Stream added, broadcasting: 5\nI0821 23:07:00.455343 99 log.go:172] (0xc0005d2840) Reply frame received for 5\nI0821 23:07:00.505145 99 log.go:172] (0xc0005d2840) Data frame received for 5\nI0821 23:07:00.505171 99 log.go:172] (0xc0002d8000) (5) Data frame handling\nI0821 23:07:00.505189 99 log.go:172] (0xc0002d8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:07:00.558704 99 log.go:172] (0xc0005d2840) Data frame received for 3\nI0821 23:07:00.558757 99 log.go:172] (0xc000683a40) (3) Data frame handling\nI0821 23:07:00.558788 99 log.go:172] (0xc000683a40) (3) Data frame sent\nI0821 23:07:00.558816 99 log.go:172] (0xc0005d2840) Data frame received for 3\nI0821 23:07:00.558848 99 log.go:172] (0xc000683a40) (3) Data frame handling\nI0821 23:07:00.559252 99 log.go:172] (0xc0005d2840) Data frame received for 5\nI0821 23:07:00.559287 99 log.go:172] (0xc0002d8000) (5) Data frame handling\nI0821 23:07:00.563355 99 log.go:172] (0xc0005d2840) Data frame received for 1\nI0821 23:07:00.563467 99 log.go:172] (0xc000556000) (1) Data frame handling\nI0821 23:07:00.563516 99 log.go:172] (0xc000556000) (1) Data frame sent\nI0821 23:07:00.563538 99 log.go:172] (0xc0005d2840) (0xc000556000) Stream removed, broadcasting: 1\nI0821 23:07:00.563560 99 log.go:172] (0xc0005d2840) Go away received\nI0821 23:07:00.564025 99 log.go:172] (0xc0005d2840) (0xc000556000) Stream removed, broadcasting: 1\nI0821 23:07:00.564046 99 log.go:172] (0xc0005d2840) (0xc000683a40) Stream removed, broadcasting: 3\nI0821 23:07:00.564055 99 log.go:172] (0xc0005d2840) (0xc0002d8000) Stream removed, broadcasting: 5\n" Aug 21 23:07:00.572: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:07:00.572: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:07:10.601: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 21 23:07:20.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3117 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:07:20.857: INFO: stderr: "I0821 23:07:20.770362 120 log.go:172] (0xc0000f6f20) (0xc0009d2140) Create stream\nI0821 23:07:20.770422 120 log.go:172] (0xc0000f6f20) (0xc0009d2140) Stream added, broadcasting: 1\nI0821 23:07:20.772664 120 log.go:172] (0xc0000f6f20) Reply frame received for 1\nI0821 23:07:20.772714 120 log.go:172] (0xc0000f6f20) (0xc000611a40) Create stream\nI0821 23:07:20.772853 120 log.go:172] (0xc0000f6f20) (0xc000611a40) Stream added, broadcasting: 3\nI0821 23:07:20.773873 120 log.go:172] (0xc0000f6f20) Reply frame received for 3\nI0821 23:07:20.773922 120 log.go:172] (0xc0000f6f20) (0xc0009d21e0) Create stream\nI0821 23:07:20.773936 120 log.go:172] (0xc0000f6f20) (0xc0009d21e0) Stream added, broadcasting: 5\nI0821 23:07:20.774890 120 log.go:172] (0xc0000f6f20) Reply frame received for 5\nI0821 23:07:20.847081 120 log.go:172] 
(0xc0000f6f20) Data frame received for 5\nI0821 23:07:20.847131 120 log.go:172] (0xc0009d21e0) (5) Data frame handling\nI0821 23:07:20.847151 120 log.go:172] (0xc0009d21e0) (5) Data frame sent\nI0821 23:07:20.847166 120 log.go:172] (0xc0000f6f20) Data frame received for 5\nI0821 23:07:20.847179 120 log.go:172] (0xc0009d21e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 23:07:20.847222 120 log.go:172] (0xc0000f6f20) Data frame received for 3\nI0821 23:07:20.847267 120 log.go:172] (0xc000611a40) (3) Data frame handling\nI0821 23:07:20.847288 120 log.go:172] (0xc000611a40) (3) Data frame sent\nI0821 23:07:20.847301 120 log.go:172] (0xc0000f6f20) Data frame received for 3\nI0821 23:07:20.847323 120 log.go:172] (0xc000611a40) (3) Data frame handling\nI0821 23:07:20.848620 120 log.go:172] (0xc0000f6f20) Data frame received for 1\nI0821 23:07:20.848649 120 log.go:172] (0xc0009d2140) (1) Data frame handling\nI0821 23:07:20.848674 120 log.go:172] (0xc0009d2140) (1) Data frame sent\nI0821 23:07:20.848705 120 log.go:172] (0xc0000f6f20) (0xc0009d2140) Stream removed, broadcasting: 1\nI0821 23:07:20.848822 120 log.go:172] (0xc0000f6f20) Go away received\nI0821 23:07:20.849285 120 log.go:172] (0xc0000f6f20) (0xc0009d2140) Stream removed, broadcasting: 1\nI0821 23:07:20.849306 120 log.go:172] (0xc0000f6f20) (0xc000611a40) Stream removed, broadcasting: 3\nI0821 23:07:20.849317 120 log.go:172] (0xc0000f6f20) (0xc0009d21e0) Stream removed, broadcasting: 5\n" Aug 21 23:07:20.857: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:07:20.857: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:07:50.874: INFO: Waiting for StatefulSet statefulset-3117/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 21 23:08:00.897: INFO: Deleting all statefulset in ns statefulset-3117 Aug 21 23:08:00.900: INFO: Scaling statefulset ss2 to 0 Aug 21 23:08:30.915: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 23:08:30.918: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:08:30.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3117" for this suite. 
• [SLOW TEST:173.968 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":4,"skipped":27,"failed":0}
SSSS
------------------------------
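The next test creates a pod whose container sets privileged: false; without CAP_NET_ADMIN the busybox "ip link add" is refused, which is the "RTNETLINK answers: Operation not permitted" line the test later reads back from the pod logs. A minimal sketch of such a pod, assuming client-go v0.18+; the pod name and the "|| true" exit handling are illustrative (the real test generates a unique name and the pod finishes Succeeded, as the log shows):

// Hypothetical sketch: an unprivileged pod whose command needs CAP_NET_ADMIN.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// "ip link add" needs CAP_NET_ADMIN; "|| true" keeps the exit code 0
				// so the pod still reaches phase Succeeded while the refusal shows
				// up in its logs.
				Command: []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{
					Privileged: boolPtr(false), // what the test asserts: the container runs unprivileged
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}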
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:08:30.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:08:31.010: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-4ee4980b-ba31-45f1-a39f-5948396228bb" in namespace "security-context-test-1822" to be "success or failure"
Aug 21 23:08:31.026: INFO: Pod "busybox-privileged-false-4ee4980b-ba31-45f1-a39f-5948396228bb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.064437ms
Aug 21 23:08:33.295: INFO: Pod "busybox-privileged-false-4ee4980b-ba31-45f1-a39f-5948396228bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28572415s
Aug 21 23:08:35.299: INFO: Pod "busybox-privileged-false-4ee4980b-ba31-45f1-a39f-5948396228bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289551608s
Aug 21 23:08:37.306: INFO: Pod "busybox-privileged-false-4ee4980b-ba31-45f1-a39f-5948396228bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.296594997s
Aug 21 23:08:37.306: INFO: Pod "busybox-privileged-false-4ee4980b-ba31-45f1-a39f-5948396228bb" satisfied condition "success or failure"
Aug 21 23:08:37.323: INFO: Got logs for pod "busybox-privileged-false-4ee4980b-ba31-45f1-a39f-5948396228bb": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:08:37.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1822" for this suite.
• [SLOW TEST:6.388 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":31,"failed":0}
SSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:08:37.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:08:37.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5815
I0821 23:08:37.420445 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5815, replica count: 1
I0821 23:08:38.470870 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0821 23:08:39.471121 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0821 23:08:40.471352 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0821 23:08:41.471568 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0821 23:08:42.471781 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive,
0 terminating, 0 unknown, 0 runningButNotReady Aug 21 23:08:42.627: INFO: Created: latency-svc-65ffx Aug 21 23:08:42.646: INFO: Got endpoints: latency-svc-65ffx [75.009542ms] Aug 21 23:08:42.739: INFO: Created: latency-svc-sq6x7 Aug 21 23:08:42.764: INFO: Created: latency-svc-g5prl Aug 21 23:08:42.764: INFO: Got endpoints: latency-svc-sq6x7 [116.866363ms] Aug 21 23:08:42.776: INFO: Got endpoints: latency-svc-g5prl [128.969483ms] Aug 21 23:08:42.800: INFO: Created: latency-svc-nrzq7 Aug 21 23:08:42.813: INFO: Got endpoints: latency-svc-nrzq7 [165.760066ms] Aug 21 23:08:42.830: INFO: Created: latency-svc-bcbmn Aug 21 23:08:42.865: INFO: Got endpoints: latency-svc-bcbmn [218.062451ms] Aug 21 23:08:42.878: INFO: Created: latency-svc-zdxh2 Aug 21 23:08:42.891: INFO: Got endpoints: latency-svc-zdxh2 [244.2379ms] Aug 21 23:08:42.909: INFO: Created: latency-svc-qknz4 Aug 21 23:08:42.921: INFO: Got endpoints: latency-svc-qknz4 [274.016342ms] Aug 21 23:08:42.939: INFO: Created: latency-svc-qtr5s Aug 21 23:08:42.952: INFO: Got endpoints: latency-svc-qtr5s [305.05798ms] Aug 21 23:08:43.002: INFO: Created: latency-svc-f6n7n Aug 21 23:08:43.028: INFO: Got endpoints: latency-svc-f6n7n [380.798828ms] Aug 21 23:08:43.028: INFO: Created: latency-svc-7vn6k Aug 21 23:08:43.042: INFO: Got endpoints: latency-svc-7vn6k [394.997728ms] Aug 21 23:08:43.076: INFO: Created: latency-svc-kmpd6 Aug 21 23:08:43.090: INFO: Got endpoints: latency-svc-kmpd6 [443.510837ms] Aug 21 23:08:43.158: INFO: Created: latency-svc-zdm4n Aug 21 23:08:43.185: INFO: Got endpoints: latency-svc-zdm4n [537.86582ms] Aug 21 23:08:43.185: INFO: Created: latency-svc-kbldg Aug 21 23:08:43.209: INFO: Got endpoints: latency-svc-kbldg [562.443266ms] Aug 21 23:08:43.232: INFO: Created: latency-svc-khqsx Aug 21 23:08:43.295: INFO: Got endpoints: latency-svc-khqsx [648.328727ms] Aug 21 23:08:43.310: INFO: Created: latency-svc-bmvqj Aug 21 23:08:43.325: INFO: Got endpoints: latency-svc-bmvqj [677.873322ms] Aug 21 23:08:43.352: INFO: Created: latency-svc-cqdl8 Aug 21 23:08:43.373: INFO: Got endpoints: latency-svc-cqdl8 [726.461886ms] Aug 21 23:08:43.465: INFO: Created: latency-svc-kjspb Aug 21 23:08:43.466: INFO: Got endpoints: latency-svc-kjspb [702.143579ms] Aug 21 23:08:43.491: INFO: Created: latency-svc-fkdtv Aug 21 23:08:43.505: INFO: Got endpoints: latency-svc-fkdtv [729.67889ms] Aug 21 23:08:43.525: INFO: Created: latency-svc-jwmbr Aug 21 23:08:43.536: INFO: Got endpoints: latency-svc-jwmbr [723.411592ms] Aug 21 23:08:43.555: INFO: Created: latency-svc-x777r Aug 21 23:08:43.613: INFO: Got endpoints: latency-svc-x777r [747.978194ms] Aug 21 23:08:43.634: INFO: Created: latency-svc-f2m8t Aug 21 23:08:43.650: INFO: Got endpoints: latency-svc-f2m8t [759.338003ms] Aug 21 23:08:43.699: INFO: Created: latency-svc-zqhsm Aug 21 23:08:43.829: INFO: Got endpoints: latency-svc-zqhsm [908.100187ms] Aug 21 23:08:44.128: INFO: Created: latency-svc-4m7tm Aug 21 23:08:44.314: INFO: Got endpoints: latency-svc-4m7tm [1.36259244s] Aug 21 23:08:44.319: INFO: Created: latency-svc-skjmx Aug 21 23:08:44.383: INFO: Got endpoints: latency-svc-skjmx [1.354939707s] Aug 21 23:08:44.519: INFO: Created: latency-svc-fcjl2 Aug 21 23:08:44.570: INFO: Got endpoints: latency-svc-fcjl2 [1.528656175s] Aug 21 23:08:44.783: INFO: Created: latency-svc-wgdrz Aug 21 23:08:44.821: INFO: Got endpoints: latency-svc-wgdrz [1.730248854s] Aug 21 23:08:44.848: INFO: Created: latency-svc-bck9h Aug 21 23:08:44.850: INFO: Got endpoints: latency-svc-bck9h [1.665346326s] Aug 21 23:08:44.919: INFO: 
Created: latency-svc-25mdp Aug 21 23:08:44.944: INFO: Created: latency-svc-tndxs Aug 21 23:08:44.944: INFO: Got endpoints: latency-svc-25mdp [1.734746826s] Aug 21 23:08:44.956: INFO: Got endpoints: latency-svc-tndxs [1.660315119s] Aug 21 23:08:44.981: INFO: Created: latency-svc-pdfzv Aug 21 23:08:44.995: INFO: Got endpoints: latency-svc-pdfzv [1.670123615s] Aug 21 23:08:45.015: INFO: Created: latency-svc-bw4zc Aug 21 23:08:45.080: INFO: Got endpoints: latency-svc-bw4zc [1.706643156s] Aug 21 23:08:45.083: INFO: Created: latency-svc-wk6c9 Aug 21 23:08:45.091: INFO: Got endpoints: latency-svc-wk6c9 [1.625571197s] Aug 21 23:08:45.118: INFO: Created: latency-svc-h9vlb Aug 21 23:08:45.128: INFO: Got endpoints: latency-svc-h9vlb [1.622314911s] Aug 21 23:08:45.148: INFO: Created: latency-svc-287kp Aug 21 23:08:45.158: INFO: Got endpoints: latency-svc-287kp [1.62190529s] Aug 21 23:08:45.178: INFO: Created: latency-svc-fzb7t Aug 21 23:08:45.241: INFO: Got endpoints: latency-svc-fzb7t [1.628623107s] Aug 21 23:08:45.261: INFO: Created: latency-svc-668zj Aug 21 23:08:45.273: INFO: Got endpoints: latency-svc-668zj [1.622406587s] Aug 21 23:08:45.290: INFO: Created: latency-svc-vz82x Aug 21 23:08:45.304: INFO: Got endpoints: latency-svc-vz82x [1.474987885s] Aug 21 23:08:45.322: INFO: Created: latency-svc-9spnd Aug 21 23:08:45.397: INFO: Got endpoints: latency-svc-9spnd [1.082836993s] Aug 21 23:08:45.406: INFO: Created: latency-svc-fn4mb Aug 21 23:08:45.424: INFO: Got endpoints: latency-svc-fn4mb [1.041538557s] Aug 21 23:08:45.447: INFO: Created: latency-svc-w4bsp Aug 21 23:08:45.460: INFO: Got endpoints: latency-svc-w4bsp [889.443706ms] Aug 21 23:08:45.478: INFO: Created: latency-svc-ccpmb Aug 21 23:08:45.490: INFO: Got endpoints: latency-svc-ccpmb [669.502239ms] Aug 21 23:08:45.547: INFO: Created: latency-svc-gbnpw Aug 21 23:08:45.550: INFO: Got endpoints: latency-svc-gbnpw [699.823193ms] Aug 21 23:08:45.610: INFO: Created: latency-svc-wccpj Aug 21 23:08:45.623: INFO: Got endpoints: latency-svc-wccpj [678.581093ms] Aug 21 23:08:45.638: INFO: Created: latency-svc-896lr Aug 21 23:08:45.703: INFO: Got endpoints: latency-svc-896lr [747.145216ms] Aug 21 23:08:45.705: INFO: Created: latency-svc-mdvk5 Aug 21 23:08:45.713: INFO: Got endpoints: latency-svc-mdvk5 [718.281243ms] Aug 21 23:08:45.736: INFO: Created: latency-svc-vs4gj Aug 21 23:08:45.760: INFO: Got endpoints: latency-svc-vs4gj [679.85961ms] Aug 21 23:08:45.784: INFO: Created: latency-svc-42lbs Aug 21 23:08:45.799: INFO: Got endpoints: latency-svc-42lbs [707.068716ms] Aug 21 23:08:45.859: INFO: Created: latency-svc-txjjg Aug 21 23:08:45.865: INFO: Got endpoints: latency-svc-txjjg [737.078019ms] Aug 21 23:08:45.885: INFO: Created: latency-svc-clvh2 Aug 21 23:08:45.895: INFO: Got endpoints: latency-svc-clvh2 [737.130054ms] Aug 21 23:08:45.915: INFO: Created: latency-svc-4q59t Aug 21 23:08:45.931: INFO: Got endpoints: latency-svc-4q59t [689.7595ms] Aug 21 23:08:45.946: INFO: Created: latency-svc-n2nhq Aug 21 23:08:46.002: INFO: Got endpoints: latency-svc-n2nhq [729.360546ms] Aug 21 23:08:46.005: INFO: Created: latency-svc-52kc5 Aug 21 23:08:46.030: INFO: Got endpoints: latency-svc-52kc5 [725.900024ms] Aug 21 23:08:46.058: INFO: Created: latency-svc-b24t6 Aug 21 23:08:46.069: INFO: Got endpoints: latency-svc-b24t6 [671.647786ms] Aug 21 23:08:46.088: INFO: Created: latency-svc-9m6g7 Aug 21 23:08:46.099: INFO: Got endpoints: latency-svc-9m6g7 [675.121243ms] Aug 21 23:08:46.158: INFO: Created: latency-svc-gdk5b Aug 21 23:08:46.172: INFO: Got endpoints: 
latency-svc-gdk5b [711.927195ms] Aug 21 23:08:46.198: INFO: Created: latency-svc-lrdcx Aug 21 23:08:46.208: INFO: Got endpoints: latency-svc-lrdcx [717.389845ms] Aug 21 23:08:46.234: INFO: Created: latency-svc-7jr8h Aug 21 23:08:46.251: INFO: Got endpoints: latency-svc-7jr8h [700.567717ms] Aug 21 23:08:46.308: INFO: Created: latency-svc-f9pt8 Aug 21 23:08:46.317: INFO: Got endpoints: latency-svc-f9pt8 [694.367153ms] Aug 21 23:08:46.346: INFO: Created: latency-svc-l8dfx Aug 21 23:08:46.359: INFO: Got endpoints: latency-svc-l8dfx [655.870397ms] Aug 21 23:08:46.376: INFO: Created: latency-svc-9zkw7 Aug 21 23:08:46.401: INFO: Got endpoints: latency-svc-9zkw7 [688.284303ms] Aug 21 23:08:46.485: INFO: Created: latency-svc-6qpsc Aug 21 23:08:46.521: INFO: Got endpoints: latency-svc-6qpsc [760.966671ms] Aug 21 23:08:46.562: INFO: Created: latency-svc-5mrzv Aug 21 23:08:46.619: INFO: Got endpoints: latency-svc-5mrzv [820.445345ms] Aug 21 23:08:46.630: INFO: Created: latency-svc-cggb6 Aug 21 23:08:46.643: INFO: Got endpoints: latency-svc-cggb6 [778.628131ms] Aug 21 23:08:46.705: INFO: Created: latency-svc-ncgw6 Aug 21 23:08:46.714: INFO: Got endpoints: latency-svc-ncgw6 [818.8269ms] Aug 21 23:08:46.769: INFO: Created: latency-svc-5ttqg Aug 21 23:08:46.809: INFO: Got endpoints: latency-svc-5ttqg [877.754701ms] Aug 21 23:08:46.839: INFO: Created: latency-svc-mldht Aug 21 23:08:46.865: INFO: Got endpoints: latency-svc-mldht [862.730799ms] Aug 21 23:08:46.955: INFO: Created: latency-svc-fzdk4 Aug 21 23:08:46.961: INFO: Got endpoints: latency-svc-fzdk4 [930.763592ms] Aug 21 23:08:47.008: INFO: Created: latency-svc-j48vt Aug 21 23:08:47.039: INFO: Got endpoints: latency-svc-j48vt [970.031243ms] Aug 21 23:08:47.124: INFO: Created: latency-svc-wfk9x Aug 21 23:08:47.152: INFO: Got endpoints: latency-svc-wfk9x [1.052104456s] Aug 21 23:08:47.174: INFO: Created: latency-svc-vdbh7 Aug 21 23:08:47.189: INFO: Got endpoints: latency-svc-vdbh7 [1.017359898s] Aug 21 23:08:47.211: INFO: Created: latency-svc-b92m2 Aug 21 23:08:47.260: INFO: Got endpoints: latency-svc-b92m2 [1.051849179s] Aug 21 23:08:47.270: INFO: Created: latency-svc-chx9q Aug 21 23:08:47.286: INFO: Got endpoints: latency-svc-chx9q [1.035610849s] Aug 21 23:08:47.308: INFO: Created: latency-svc-wtk88 Aug 21 23:08:47.322: INFO: Got endpoints: latency-svc-wtk88 [1.004906146s] Aug 21 23:08:47.344: INFO: Created: latency-svc-dbcld Aug 21 23:08:47.415: INFO: Got endpoints: latency-svc-dbcld [1.056333678s] Aug 21 23:08:47.432: INFO: Created: latency-svc-sw5w5 Aug 21 23:08:47.456: INFO: Got endpoints: latency-svc-sw5w5 [1.054853401s] Aug 21 23:08:47.480: INFO: Created: latency-svc-5rgtf Aug 21 23:08:47.491: INFO: Got endpoints: latency-svc-5rgtf [969.97013ms] Aug 21 23:08:47.561: INFO: Created: latency-svc-fb7nl Aug 21 23:08:47.569: INFO: Got endpoints: latency-svc-fb7nl [949.802056ms] Aug 21 23:08:47.595: INFO: Created: latency-svc-j22jf Aug 21 23:08:47.605: INFO: Got endpoints: latency-svc-j22jf [961.936149ms] Aug 21 23:08:47.643: INFO: Created: latency-svc-v6bhh Aug 21 23:08:47.738: INFO: Got endpoints: latency-svc-v6bhh [1.024160687s] Aug 21 23:08:47.741: INFO: Created: latency-svc-9nt4b Aug 21 23:08:47.762: INFO: Got endpoints: latency-svc-9nt4b [953.120847ms] Aug 21 23:08:47.782: INFO: Created: latency-svc-28mgq Aug 21 23:08:47.810: INFO: Got endpoints: latency-svc-28mgq [945.354627ms] Aug 21 23:08:47.966: INFO: Created: latency-svc-4kn7d Aug 21 23:08:47.970: INFO: Got endpoints: latency-svc-4kn7d [1.00895186s] Aug 21 23:08:47.998: INFO: Created: 
latency-svc-fl4ww Aug 21 23:08:48.009: INFO: Got endpoints: latency-svc-fl4ww [969.416919ms] Aug 21 23:08:48.051: INFO: Created: latency-svc-th5gq Aug 21 23:08:48.063: INFO: Got endpoints: latency-svc-th5gq [911.256418ms] Aug 21 23:08:48.122: INFO: Created: latency-svc-hvqrk Aug 21 23:08:48.135: INFO: Got endpoints: latency-svc-hvqrk [945.62416ms] Aug 21 23:08:48.159: INFO: Created: latency-svc-6878r Aug 21 23:08:48.171: INFO: Got endpoints: latency-svc-6878r [911.686329ms] Aug 21 23:08:48.189: INFO: Created: latency-svc-r4h9j Aug 21 23:08:48.202: INFO: Got endpoints: latency-svc-r4h9j [915.591342ms] Aug 21 23:08:48.296: INFO: Created: latency-svc-g7djh Aug 21 23:08:48.304: INFO: Got endpoints: latency-svc-g7djh [981.592939ms] Aug 21 23:08:48.341: INFO: Created: latency-svc-q8lzc Aug 21 23:08:48.352: INFO: Got endpoints: latency-svc-q8lzc [936.985064ms] Aug 21 23:08:48.387: INFO: Created: latency-svc-29xcn Aug 21 23:08:48.463: INFO: Got endpoints: latency-svc-29xcn [1.006294357s] Aug 21 23:08:48.471: INFO: Created: latency-svc-8m7zv Aug 21 23:08:48.485: INFO: Got endpoints: latency-svc-8m7zv [993.856092ms] Aug 21 23:08:48.507: INFO: Created: latency-svc-s7g6h Aug 21 23:08:48.511: INFO: Got endpoints: latency-svc-s7g6h [941.738712ms] Aug 21 23:08:48.536: INFO: Created: latency-svc-vvpfg Aug 21 23:08:48.539: INFO: Got endpoints: latency-svc-vvpfg [933.959765ms] Aug 21 23:08:48.607: INFO: Created: latency-svc-rsfvh Aug 21 23:08:48.625: INFO: Got endpoints: latency-svc-rsfvh [886.365055ms] Aug 21 23:08:48.645: INFO: Created: latency-svc-xjbwh Aug 21 23:08:48.663: INFO: Got endpoints: latency-svc-xjbwh [901.016077ms] Aug 21 23:08:48.699: INFO: Created: latency-svc-m8h69 Aug 21 23:08:48.769: INFO: Got endpoints: latency-svc-m8h69 [958.158476ms] Aug 21 23:08:48.818: INFO: Created: latency-svc-7mdtb Aug 21 23:08:49.008: INFO: Got endpoints: latency-svc-7mdtb [1.038613671s] Aug 21 23:08:49.011: INFO: Created: latency-svc-szhwr Aug 21 23:08:49.022: INFO: Got endpoints: latency-svc-szhwr [1.013011493s] Aug 21 23:08:49.047: INFO: Created: latency-svc-95sck Aug 21 23:08:49.057: INFO: Got endpoints: latency-svc-95sck [993.972798ms] Aug 21 23:08:49.077: INFO: Created: latency-svc-hcvbf Aug 21 23:08:49.087: INFO: Got endpoints: latency-svc-hcvbf [951.969695ms] Aug 21 23:08:49.110: INFO: Created: latency-svc-h5ljh Aug 21 23:08:49.158: INFO: Got endpoints: latency-svc-h5ljh [986.314041ms] Aug 21 23:08:49.178: INFO: Created: latency-svc-szthh Aug 21 23:08:49.190: INFO: Got endpoints: latency-svc-szthh [987.931946ms] Aug 21 23:08:49.215: INFO: Created: latency-svc-9pngk Aug 21 23:08:49.226: INFO: Got endpoints: latency-svc-9pngk [921.683757ms] Aug 21 23:08:49.243: INFO: Created: latency-svc-6rst7 Aug 21 23:08:49.256: INFO: Got endpoints: latency-svc-6rst7 [903.926623ms] Aug 21 23:08:49.314: INFO: Created: latency-svc-wbzjz Aug 21 23:08:49.317: INFO: Got endpoints: latency-svc-wbzjz [853.897781ms] Aug 21 23:08:49.359: INFO: Created: latency-svc-r95t6 Aug 21 23:08:49.407: INFO: Got endpoints: latency-svc-r95t6 [922.049897ms] Aug 21 23:08:49.490: INFO: Created: latency-svc-6p2bv Aug 21 23:08:49.490: INFO: Got endpoints: latency-svc-6p2bv [979.418085ms] Aug 21 23:08:49.525: INFO: Created: latency-svc-knxp5 Aug 21 23:08:49.539: INFO: Got endpoints: latency-svc-knxp5 [999.803705ms] Aug 21 23:08:49.557: INFO: Created: latency-svc-qqzsj Aug 21 23:08:49.569: INFO: Got endpoints: latency-svc-qqzsj [944.50038ms] Aug 21 23:08:49.586: INFO: Created: latency-svc-xhxzj Aug 21 23:08:49.649: INFO: Got endpoints: 
latency-svc-xhxzj [985.383101ms] Aug 21 23:08:49.653: INFO: Created: latency-svc-vg26q Aug 21 23:08:49.681: INFO: Got endpoints: latency-svc-vg26q [912.847558ms] Aug 21 23:08:49.718: INFO: Created: latency-svc-86qmv Aug 21 23:08:49.732: INFO: Got endpoints: latency-svc-86qmv [723.918029ms] Aug 21 23:08:49.749: INFO: Created: latency-svc-qtqbg Aug 21 23:08:49.816: INFO: Got endpoints: latency-svc-qtqbg [794.782327ms] Aug 21 23:08:49.819: INFO: Created: latency-svc-54tgt Aug 21 23:08:49.841: INFO: Got endpoints: latency-svc-54tgt [784.381394ms] Aug 21 23:08:49.862: INFO: Created: latency-svc-c7nfd Aug 21 23:08:49.885: INFO: Got endpoints: latency-svc-c7nfd [797.935502ms] Aug 21 23:08:49.909: INFO: Created: latency-svc-8qtwj Aug 21 23:08:49.978: INFO: Got endpoints: latency-svc-8qtwj [820.325289ms] Aug 21 23:08:49.981: INFO: Created: latency-svc-t7598 Aug 21 23:08:49.999: INFO: Got endpoints: latency-svc-t7598 [808.702376ms] Aug 21 23:08:50.019: INFO: Created: latency-svc-jxxdm Aug 21 23:08:50.035: INFO: Got endpoints: latency-svc-jxxdm [808.942468ms] Aug 21 23:08:50.055: INFO: Created: latency-svc-nndbp Aug 21 23:08:50.064: INFO: Got endpoints: latency-svc-nndbp [808.274904ms] Aug 21 23:08:50.158: INFO: Created: latency-svc-x5q7m Aug 21 23:08:50.162: INFO: Got endpoints: latency-svc-x5q7m [844.754086ms] Aug 21 23:08:50.227: INFO: Created: latency-svc-l2zjv Aug 21 23:08:50.251: INFO: Got endpoints: latency-svc-l2zjv [843.998428ms] Aug 21 23:08:50.308: INFO: Created: latency-svc-56cqk Aug 21 23:08:50.317: INFO: Got endpoints: latency-svc-56cqk [826.85884ms] Aug 21 23:08:50.360: INFO: Created: latency-svc-xwlsk Aug 21 23:08:50.371: INFO: Got endpoints: latency-svc-xwlsk [831.707242ms] Aug 21 23:08:50.505: INFO: Created: latency-svc-p8pvd Aug 21 23:08:50.508: INFO: Got endpoints: latency-svc-p8pvd [938.239773ms] Aug 21 23:08:50.566: INFO: Created: latency-svc-h2x7z Aug 21 23:08:50.575: INFO: Got endpoints: latency-svc-h2x7z [926.520969ms] Aug 21 23:08:50.679: INFO: Created: latency-svc-dbt2h Aug 21 23:08:50.756: INFO: Got endpoints: latency-svc-dbt2h [1.074040892s] Aug 21 23:08:50.835: INFO: Created: latency-svc-chz49 Aug 21 23:08:50.852: INFO: Got endpoints: latency-svc-chz49 [1.119377702s] Aug 21 23:08:50.888: INFO: Created: latency-svc-vzdnk Aug 21 23:08:50.913: INFO: Got endpoints: latency-svc-vzdnk [1.096406231s] Aug 21 23:08:50.997: INFO: Created: latency-svc-bt7lj Aug 21 23:08:51.000: INFO: Got endpoints: latency-svc-bt7lj [1.158113856s] Aug 21 23:08:51.026: INFO: Created: latency-svc-4l9ct Aug 21 23:08:51.041: INFO: Got endpoints: latency-svc-4l9ct [1.156020011s] Aug 21 23:08:51.062: INFO: Created: latency-svc-ptkfx Aug 21 23:08:51.078: INFO: Got endpoints: latency-svc-ptkfx [1.099614633s] Aug 21 23:08:51.146: INFO: Created: latency-svc-44q97 Aug 21 23:08:51.149: INFO: Got endpoints: latency-svc-44q97 [1.150324569s] Aug 21 23:08:51.181: INFO: Created: latency-svc-x9csd Aug 21 23:08:51.192: INFO: Got endpoints: latency-svc-x9csd [1.157157386s] Aug 21 23:08:51.211: INFO: Created: latency-svc-jxm8b Aug 21 23:08:51.235: INFO: Got endpoints: latency-svc-jxm8b [1.170428802s] Aug 21 23:08:51.309: INFO: Created: latency-svc-jpl7g Aug 21 23:08:51.312: INFO: Got endpoints: latency-svc-jpl7g [1.150756322s] Aug 21 23:08:51.338: INFO: Created: latency-svc-j67pn Aug 21 23:08:51.355: INFO: Got endpoints: latency-svc-j67pn [1.103865069s] Aug 21 23:08:51.379: INFO: Created: latency-svc-ljpng Aug 21 23:08:51.391: INFO: Got endpoints: latency-svc-ljpng [1.07354132s] Aug 21 23:08:51.457: INFO: Created: 
latency-svc-n4fvn Aug 21 23:08:51.481: INFO: Got endpoints: latency-svc-n4fvn [1.109702311s] Aug 21 23:08:51.512: INFO: Created: latency-svc-gbwwq Aug 21 23:08:51.523: INFO: Got endpoints: latency-svc-gbwwq [1.01576422s] Aug 21 23:08:51.583: INFO: Created: latency-svc-h7kqv Aug 21 23:08:51.586: INFO: Got endpoints: latency-svc-h7kqv [1.010669835s] Aug 21 23:08:51.620: INFO: Created: latency-svc-xw5t2 Aug 21 23:08:51.634: INFO: Got endpoints: latency-svc-xw5t2 [878.094831ms] Aug 21 23:08:51.656: INFO: Created: latency-svc-swbpj Aug 21 23:08:51.675: INFO: Got endpoints: latency-svc-swbpj [822.858127ms] Aug 21 23:08:51.728: INFO: Created: latency-svc-6m854 Aug 21 23:08:51.752: INFO: Got endpoints: latency-svc-6m854 [839.098508ms] Aug 21 23:08:51.792: INFO: Created: latency-svc-7tgld Aug 21 23:08:51.807: INFO: Got endpoints: latency-svc-7tgld [807.637856ms] Aug 21 23:08:51.907: INFO: Created: latency-svc-7rzvh Aug 21 23:08:51.936: INFO: Got endpoints: latency-svc-7rzvh [895.390305ms] Aug 21 23:08:51.937: INFO: Created: latency-svc-lrz9j Aug 21 23:08:51.945: INFO: Got endpoints: latency-svc-lrz9j [867.506465ms] Aug 21 23:08:51.973: INFO: Created: latency-svc-pt8nb Aug 21 23:08:51.984: INFO: Got endpoints: latency-svc-pt8nb [835.026975ms] Aug 21 23:08:52.080: INFO: Created: latency-svc-hf429 Aug 21 23:08:52.084: INFO: Got endpoints: latency-svc-hf429 [891.649992ms] Aug 21 23:08:52.129: INFO: Created: latency-svc-gbcxf Aug 21 23:08:52.157: INFO: Got endpoints: latency-svc-gbcxf [922.166932ms] Aug 21 23:08:52.223: INFO: Created: latency-svc-8z2lq Aug 21 23:08:52.226: INFO: Got endpoints: latency-svc-8z2lq [913.813232ms] Aug 21 23:08:52.254: INFO: Created: latency-svc-c7jm7 Aug 21 23:08:52.271: INFO: Got endpoints: latency-svc-c7jm7 [915.97691ms] Aug 21 23:08:52.311: INFO: Created: latency-svc-jhjx2 Aug 21 23:08:52.392: INFO: Got endpoints: latency-svc-jhjx2 [1.001088622s] Aug 21 23:08:52.422: INFO: Created: latency-svc-5pvmq Aug 21 23:08:52.458: INFO: Got endpoints: latency-svc-5pvmq [977.537352ms] Aug 21 23:08:52.490: INFO: Created: latency-svc-2vl46 Aug 21 23:08:52.553: INFO: Got endpoints: latency-svc-2vl46 [1.029734086s] Aug 21 23:08:52.557: INFO: Created: latency-svc-9qktz Aug 21 23:08:52.565: INFO: Got endpoints: latency-svc-9qktz [979.461861ms] Aug 21 23:08:52.592: INFO: Created: latency-svc-nvnvd Aug 21 23:08:52.608: INFO: Got endpoints: latency-svc-nvnvd [974.269322ms] Aug 21 23:08:52.644: INFO: Created: latency-svc-hpjbh Aug 21 23:08:52.709: INFO: Got endpoints: latency-svc-hpjbh [1.033777578s] Aug 21 23:08:52.730: INFO: Created: latency-svc-bwm7r Aug 21 23:08:52.734: INFO: Got endpoints: latency-svc-bwm7r [981.885703ms] Aug 21 23:08:52.760: INFO: Created: latency-svc-jpmw6 Aug 21 23:08:52.771: INFO: Got endpoints: latency-svc-jpmw6 [963.368044ms] Aug 21 23:08:52.795: INFO: Created: latency-svc-4l7ln Aug 21 23:08:52.807: INFO: Got endpoints: latency-svc-4l7ln [870.277546ms] Aug 21 23:08:52.877: INFO: Created: latency-svc-cgh5t Aug 21 23:08:52.879: INFO: Got endpoints: latency-svc-cgh5t [934.143599ms] Aug 21 23:08:52.926: INFO: Created: latency-svc-4mkd2 Aug 21 23:08:52.940: INFO: Got endpoints: latency-svc-4mkd2 [955.406391ms] Aug 21 23:08:52.956: INFO: Created: latency-svc-ql9h7 Aug 21 23:08:52.970: INFO: Got endpoints: latency-svc-ql9h7 [886.647875ms] Aug 21 23:08:53.026: INFO: Created: latency-svc-txhkk Aug 21 23:08:53.029: INFO: Got endpoints: latency-svc-txhkk [872.377557ms] Aug 21 23:08:53.078: INFO: Created: latency-svc-98t9w Aug 21 23:08:53.097: INFO: Got endpoints: 
latency-svc-98t9w [871.064721ms] Aug 21 23:08:53.176: INFO: Created: latency-svc-hktz2 Aug 21 23:08:53.186: INFO: Got endpoints: latency-svc-hktz2 [915.519994ms] Aug 21 23:08:53.208: INFO: Created: latency-svc-rphzx Aug 21 23:08:53.227: INFO: Got endpoints: latency-svc-rphzx [835.093675ms] Aug 21 23:08:53.258: INFO: Created: latency-svc-rnrtr Aug 21 23:08:53.271: INFO: Got endpoints: latency-svc-rnrtr [812.240704ms] Aug 21 23:08:53.319: INFO: Created: latency-svc-j8j65 Aug 21 23:08:53.322: INFO: Got endpoints: latency-svc-j8j65 [768.708039ms] Aug 21 23:08:53.346: INFO: Created: latency-svc-m7zvm Aug 21 23:08:53.362: INFO: Got endpoints: latency-svc-m7zvm [796.429977ms] Aug 21 23:08:53.394: INFO: Created: latency-svc-vxnbz Aug 21 23:08:53.410: INFO: Got endpoints: latency-svc-vxnbz [802.018559ms] Aug 21 23:08:53.464: INFO: Created: latency-svc-wsvgl Aug 21 23:08:53.479: INFO: Got endpoints: latency-svc-wsvgl [770.769875ms] Aug 21 23:08:53.510: INFO: Created: latency-svc-sg6bt Aug 21 23:08:53.525: INFO: Got endpoints: latency-svc-sg6bt [790.47578ms] Aug 21 23:08:53.545: INFO: Created: latency-svc-jld69 Aug 21 23:08:53.561: INFO: Got endpoints: latency-svc-jld69 [790.352585ms] Aug 21 23:08:53.613: INFO: Created: latency-svc-vc45g Aug 21 23:08:53.616: INFO: Got endpoints: latency-svc-vc45g [808.94808ms] Aug 21 23:08:53.640: INFO: Created: latency-svc-zqbhr Aug 21 23:08:53.651: INFO: Got endpoints: latency-svc-zqbhr [771.485767ms] Aug 21 23:08:53.690: INFO: Created: latency-svc-z7rbb Aug 21 23:08:53.762: INFO: Got endpoints: latency-svc-z7rbb [822.772125ms] Aug 21 23:08:53.765: INFO: Created: latency-svc-xzmr9 Aug 21 23:08:53.778: INFO: Got endpoints: latency-svc-xzmr9 [807.337081ms] Aug 21 23:08:53.796: INFO: Created: latency-svc-x82mn Aug 21 23:08:53.808: INFO: Got endpoints: latency-svc-x82mn [778.280448ms] Aug 21 23:08:53.826: INFO: Created: latency-svc-fx2cl Aug 21 23:08:53.839: INFO: Got endpoints: latency-svc-fx2cl [741.405695ms] Aug 21 23:08:53.858: INFO: Created: latency-svc-8q9fb Aug 21 23:08:53.912: INFO: Got endpoints: latency-svc-8q9fb [725.57938ms] Aug 21 23:08:53.917: INFO: Created: latency-svc-vw4rp Aug 21 23:08:53.935: INFO: Got endpoints: latency-svc-vw4rp [708.252921ms] Aug 21 23:08:53.960: INFO: Created: latency-svc-rxm79 Aug 21 23:08:53.971: INFO: Got endpoints: latency-svc-rxm79 [700.32253ms] Aug 21 23:08:53.988: INFO: Created: latency-svc-vjhkq Aug 21 23:08:54.002: INFO: Got endpoints: latency-svc-vjhkq [680.174245ms] Aug 21 23:08:54.050: INFO: Created: latency-svc-prhn4 Aug 21 23:08:54.084: INFO: Got endpoints: latency-svc-prhn4 [721.809206ms] Aug 21 23:08:54.085: INFO: Created: latency-svc-vns5w Aug 21 23:08:54.109: INFO: Got endpoints: latency-svc-vns5w [699.131085ms] Aug 21 23:08:54.188: INFO: Created: latency-svc-q6wjb Aug 21 23:08:54.191: INFO: Got endpoints: latency-svc-q6wjb [711.12467ms] Aug 21 23:08:54.211: INFO: Created: latency-svc-tds64 Aug 21 23:08:54.225: INFO: Got endpoints: latency-svc-tds64 [700.189567ms] Aug 21 23:08:54.246: INFO: Created: latency-svc-pcq82 Aug 21 23:08:54.261: INFO: Got endpoints: latency-svc-pcq82 [699.818459ms] Aug 21 23:08:54.344: INFO: Created: latency-svc-vpwhv Aug 21 23:08:54.363: INFO: Got endpoints: latency-svc-vpwhv [747.244101ms] Aug 21 23:08:54.386: INFO: Created: latency-svc-xm56x Aug 21 23:08:54.499: INFO: Got endpoints: latency-svc-xm56x [848.087187ms] Aug 21 23:08:54.501: INFO: Created: latency-svc-qkhvx Aug 21 23:08:54.538: INFO: Got endpoints: latency-svc-qkhvx [775.349047ms] Aug 21 23:08:54.564: INFO: Created: 
latency-svc-7bktd Aug 21 23:08:54.590: INFO: Got endpoints: latency-svc-7bktd [811.84198ms] Aug 21 23:08:54.637: INFO: Created: latency-svc-twm9h Aug 21 23:08:54.684: INFO: Got endpoints: latency-svc-twm9h [876.491112ms] Aug 21 23:08:54.685: INFO: Created: latency-svc-48bkj Aug 21 23:08:54.706: INFO: Got endpoints: latency-svc-48bkj [867.595768ms] Aug 21 23:08:54.732: INFO: Created: latency-svc-gs2dm Aug 21 23:08:54.787: INFO: Got endpoints: latency-svc-gs2dm [874.890398ms] Aug 21 23:08:54.805: INFO: Created: latency-svc-cvhrd Aug 21 23:08:54.833: INFO: Got endpoints: latency-svc-cvhrd [897.685142ms] Aug 21 23:08:54.859: INFO: Created: latency-svc-cjkph Aug 21 23:08:54.875: INFO: Got endpoints: latency-svc-cjkph [904.208643ms] Aug 21 23:08:54.949: INFO: Created: latency-svc-p9tvc Aug 21 23:08:54.951: INFO: Got endpoints: latency-svc-p9tvc [948.712388ms] Aug 21 23:08:54.978: INFO: Created: latency-svc-xzzwf Aug 21 23:08:54.989: INFO: Got endpoints: latency-svc-xzzwf [905.634788ms] Aug 21 23:08:55.008: INFO: Created: latency-svc-jx7zv Aug 21 23:08:55.033: INFO: Got endpoints: latency-svc-jx7zv [923.317724ms] Aug 21 23:08:55.033: INFO: Latencies: [116.866363ms 128.969483ms 165.760066ms 218.062451ms 244.2379ms 274.016342ms 305.05798ms 380.798828ms 394.997728ms 443.510837ms 537.86582ms 562.443266ms 648.328727ms 655.870397ms 669.502239ms 671.647786ms 675.121243ms 677.873322ms 678.581093ms 679.85961ms 680.174245ms 688.284303ms 689.7595ms 694.367153ms 699.131085ms 699.818459ms 699.823193ms 700.189567ms 700.32253ms 700.567717ms 702.143579ms 707.068716ms 708.252921ms 711.12467ms 711.927195ms 717.389845ms 718.281243ms 721.809206ms 723.411592ms 723.918029ms 725.57938ms 725.900024ms 726.461886ms 729.360546ms 729.67889ms 737.078019ms 737.130054ms 741.405695ms 747.145216ms 747.244101ms 747.978194ms 759.338003ms 760.966671ms 768.708039ms 770.769875ms 771.485767ms 775.349047ms 778.280448ms 778.628131ms 784.381394ms 790.352585ms 790.47578ms 794.782327ms 796.429977ms 797.935502ms 802.018559ms 807.337081ms 807.637856ms 808.274904ms 808.702376ms 808.942468ms 808.94808ms 811.84198ms 812.240704ms 818.8269ms 820.325289ms 820.445345ms 822.772125ms 822.858127ms 826.85884ms 831.707242ms 835.026975ms 835.093675ms 839.098508ms 843.998428ms 844.754086ms 848.087187ms 853.897781ms 862.730799ms 867.506465ms 867.595768ms 870.277546ms 871.064721ms 872.377557ms 874.890398ms 876.491112ms 877.754701ms 878.094831ms 886.365055ms 886.647875ms 889.443706ms 891.649992ms 895.390305ms 897.685142ms 901.016077ms 903.926623ms 904.208643ms 905.634788ms 908.100187ms 911.256418ms 911.686329ms 912.847558ms 913.813232ms 915.519994ms 915.591342ms 915.97691ms 921.683757ms 922.049897ms 922.166932ms 923.317724ms 926.520969ms 930.763592ms 933.959765ms 934.143599ms 936.985064ms 938.239773ms 941.738712ms 944.50038ms 945.354627ms 945.62416ms 948.712388ms 949.802056ms 951.969695ms 953.120847ms 955.406391ms 958.158476ms 961.936149ms 963.368044ms 969.416919ms 969.97013ms 970.031243ms 974.269322ms 977.537352ms 979.418085ms 979.461861ms 981.592939ms 981.885703ms 985.383101ms 986.314041ms 987.931946ms 993.856092ms 993.972798ms 999.803705ms 1.001088622s 1.004906146s 1.006294357s 1.00895186s 1.010669835s 1.013011493s 1.01576422s 1.017359898s 1.024160687s 1.029734086s 1.033777578s 1.035610849s 1.038613671s 1.041538557s 1.051849179s 1.052104456s 1.054853401s 1.056333678s 1.07354132s 1.074040892s 1.082836993s 1.096406231s 1.099614633s 1.103865069s 1.109702311s 1.119377702s 1.150324569s 1.150756322s 1.156020011s 1.157157386s 1.158113856s 1.170428802s 
1.354939707s 1.36259244s 1.474987885s 1.528656175s 1.62190529s 1.622314911s 1.622406587s 1.625571197s 1.628623107s 1.660315119s 1.665346326s 1.670123615s 1.706643156s 1.730248854s 1.734746826s] Aug 21 23:08:55.033: INFO: 50 %ile: 889.443706ms Aug 21 23:08:55.033: INFO: 90 %ile: 1.150756322s Aug 21 23:08:55.033: INFO: 99 %ile: 1.730248854s Aug 21 23:08:55.033: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:08:55.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5815" for this suite. • [SLOW TEST:17.751 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":6,"skipped":36,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:08:55.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:08:55.124: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 21 23:08:57.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6627 create -f -' Aug 21 23:09:01.194: INFO: stderr: "" Aug 21 23:09:01.194: INFO: stdout: "e2e-test-crd-publish-openapi-2790-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 21 23:09:01.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6627 delete e2e-test-crd-publish-openapi-2790-crds test-cr' Aug 21 23:09:01.302: INFO: stderr: "" Aug 21 23:09:01.302: INFO: stdout: "e2e-test-crd-publish-openapi-2790-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 21 23:09:01.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6627 apply -f -' Aug 21 23:09:01.595: INFO: stderr: "" Aug 21 23:09:01.595: INFO: stdout: "e2e-test-crd-publish-openapi-2790-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 21 23:09:01.595: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6627 delete e2e-test-crd-publish-openapi-2790-crds test-cr' Aug 21 23:09:01.719: INFO: stderr: "" Aug 21 23:09:01.719: INFO: stdout: "e2e-test-crd-publish-openapi-2790-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 21 23:09:01.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2790-crds' Aug 21 23:09:02.027: INFO: stderr: "" Aug 21 23:09:02.027: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2790-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:09:04.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6627" for this suite. • [SLOW TEST:9.899 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":7,"skipped":37,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:09:04.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 21 23:09:06.306: INFO: Pod name wrapped-volume-race-999d4b59-1d53-409a-a36c-7d602e849312: Found 0 pods out of 5 Aug 21 23:09:11.358: INFO: Pod name wrapped-volume-race-999d4b59-1d53-409a-a36c-7d602e849312: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-999d4b59-1d53-409a-a36c-7d602e849312 in namespace emptydir-wrapper-8477, will wait for the garbage collector to delete the pods Aug 21 23:09:27.757: INFO: Deleting ReplicationController wrapped-volume-race-999d4b59-1d53-409a-a36c-7d602e849312 took: 226.266378ms Aug 21 23:09:28.157: INFO: Terminating ReplicationController
wrapped-volume-race-999d4b59-1d53-409a-a36c-7d602e849312 pods took: 400.303597ms STEP: Creating RC which spawns configmap-volume pods Aug 21 23:09:41.786: INFO: Pod name wrapped-volume-race-05cad537-1d62-4709-b19e-5e7cbe94e4a4: Found 0 pods out of 5 Aug 21 23:09:46.793: INFO: Pod name wrapped-volume-race-05cad537-1d62-4709-b19e-5e7cbe94e4a4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-05cad537-1d62-4709-b19e-5e7cbe94e4a4 in namespace emptydir-wrapper-8477, will wait for the garbage collector to delete the pods Aug 21 23:10:00.935: INFO: Deleting ReplicationController wrapped-volume-race-05cad537-1d62-4709-b19e-5e7cbe94e4a4 took: 6.488834ms Aug 21 23:10:01.236: INFO: Terminating ReplicationController wrapped-volume-race-05cad537-1d62-4709-b19e-5e7cbe94e4a4 pods took: 300.291106ms STEP: Creating RC which spawns configmap-volume pods Aug 21 23:10:12.671: INFO: Pod name wrapped-volume-race-e15d7ec7-43d1-4454-972c-103a21db5a15: Found 0 pods out of 5 Aug 21 23:10:17.694: INFO: Pod name wrapped-volume-race-e15d7ec7-43d1-4454-972c-103a21db5a15: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e15d7ec7-43d1-4454-972c-103a21db5a15 in namespace emptydir-wrapper-8477, will wait for the garbage collector to delete the pods Aug 21 23:10:33.788: INFO: Deleting ReplicationController wrapped-volume-race-e15d7ec7-43d1-4454-972c-103a21db5a15 took: 6.851055ms Aug 21 23:10:34.088: INFO: Terminating ReplicationController wrapped-volume-race-e15d7ec7-43d1-4454-972c-103a21db5a15 pods took: 300.240365ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:10:42.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8477" for this suite. 
• [SLOW TEST:97.440 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":8,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:10:42.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 23:10:43.070: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 23:10:45.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 23:10:47.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648243, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 23:10:50.933: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 21 23:10:50.953: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:10:50.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2893" for this suite. STEP: Destroying namespace "webhook-2893-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.685 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":9,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:10:51.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 21 23:10:51.189: INFO: Waiting up to 5m0s for pod "pod-94ca7523-d3c1-464b-81d2-e5dc524d6888" in namespace "emptydir-6618" to be "success or failure" Aug 21 23:10:51.195: INFO: Pod "pod-94ca7523-d3c1-464b-81d2-e5dc524d6888": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.710442ms Aug 21 23:10:53.199: INFO: Pod "pod-94ca7523-d3c1-464b-81d2-e5dc524d6888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009835937s Aug 21 23:10:55.303: INFO: Pod "pod-94ca7523-d3c1-464b-81d2-e5dc524d6888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113543052s STEP: Saw pod success Aug 21 23:10:55.303: INFO: Pod "pod-94ca7523-d3c1-464b-81d2-e5dc524d6888" satisfied condition "success or failure" Aug 21 23:10:55.306: INFO: Trying to get logs from node jerma-worker2 pod pod-94ca7523-d3c1-464b-81d2-e5dc524d6888 container test-container: STEP: delete the pod Aug 21 23:10:55.390: INFO: Waiting for pod pod-94ca7523-d3c1-464b-81d2-e5dc524d6888 to disappear Aug 21 23:10:55.482: INFO: Pod pod-94ca7523-d3c1-464b-81d2-e5dc524d6888 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:10:55.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6618" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":98,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:10:55.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 21 23:10:55.639: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274592 0 2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 21 23:10:55.639: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274592 0 2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 21 23:11:05.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274677 0 
2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 21 23:11:05.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274677 0 2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 21 23:11:15.659: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274707 0 2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 21 23:11:15.660: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274707 0 2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 21 23:11:25.667: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274737 0 2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 21 23:11:25.667: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-a 774bb03a-e29d-4498-a75b-857d959d6e81 2274737 0 2020-08-21 23:10:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 21 23:11:35.675: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-b e5668aad-253b-4214-b752-b06ff42411b5 2274767 0 2020-08-21 23:11:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 21 23:11:35.675: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-b e5668aad-253b-4214-b752-b06ff42411b5 2274767 0 2020-08-21 23:11:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 21 23:11:45.682: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-b e5668aad-253b-4214-b752-b06ff42411b5 2274802 0 2020-08-21 23:11:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 21 23:11:45.682: INFO: Got : 
DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1159 /api/v1/namespaces/watch-1159/configmaps/e2e-watch-test-configmap-b e5668aad-253b-4214-b752-b06ff42411b5 2274802 0 2020-08-21 23:11:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:11:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1159" for this suite. • [SLOW TEST:60.182 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":11,"skipped":102,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:11:55.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:11:55.761: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 21 23:11:58.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4927 create -f -' Aug 21 23:12:02.024: INFO: stderr: "" Aug 21 23:12:02.024: INFO: stdout: "e2e-test-crd-publish-openapi-4597-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 21 23:12:02.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4927 delete e2e-test-crd-publish-openapi-4597-crds test-cr' Aug 21 23:12:02.186: INFO: stderr: "" Aug 21 23:12:02.186: INFO: stdout: "e2e-test-crd-publish-openapi-4597-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 21 23:12:02.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4927 apply -f -' Aug 21 23:12:02.539: INFO: stderr: "" Aug 21 23:12:02.539: INFO: stdout: "e2e-test-crd-publish-openapi-4597-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr 
created\n" Aug 21 23:12:02.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4927 delete e2e-test-crd-publish-openapi-4597-crds test-cr' Aug 21 23:12:02.661: INFO: stderr: "" Aug 21 23:12:02.661: INFO: stdout: "e2e-test-crd-publish-openapi-4597-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 21 23:12:02.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4597-crds' Aug 21 23:12:02.890: INFO: stderr: "" Aug 21 23:12:02.890: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4597-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:12:04.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4927" for this suite. 
• [SLOW TEST:9.113 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":12,"skipped":109,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:12:04.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5802 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5802 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5802 Aug 21 23:12:04.910: INFO: Found 0 stateful pods, waiting for 1 Aug 21 23:12:14.915: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 21 23:12:14.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:12:15.194: INFO: stderr: "I0821 23:12:15.051713 368 log.go:172] (0xc000b634a0) (0xc000a64780) Create stream\nI0821 23:12:15.051774 368 log.go:172] (0xc000b634a0) (0xc000a64780) Stream added, broadcasting: 1\nI0821 23:12:15.056308 368 log.go:172] (0xc000b634a0) Reply frame received for 1\nI0821 23:12:15.056341 368 log.go:172] (0xc000b634a0) (0xc0006c8780) Create stream\nI0821 23:12:15.056349 368 log.go:172] (0xc000b634a0) (0xc0006c8780) Stream added, broadcasting: 3\nI0821 23:12:15.057410 368 log.go:172] (0xc000b634a0) Reply frame received for 3\nI0821 23:12:15.057438 368 log.go:172] (0xc000b634a0) (0xc000515540) Create stream\nI0821 23:12:15.057448 368 log.go:172] (0xc000b634a0) (0xc000515540) Stream 
added, broadcasting: 5\nI0821 23:12:15.058218 368 log.go:172] (0xc000b634a0) Reply frame received for 5\nI0821 23:12:15.154264 368 log.go:172] (0xc000b634a0) Data frame received for 5\nI0821 23:12:15.154289 368 log.go:172] (0xc000515540) (5) Data frame handling\nI0821 23:12:15.154302 368 log.go:172] (0xc000515540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:12:15.181662 368 log.go:172] (0xc000b634a0) Data frame received for 3\nI0821 23:12:15.181682 368 log.go:172] (0xc0006c8780) (3) Data frame handling\nI0821 23:12:15.181688 368 log.go:172] (0xc0006c8780) (3) Data frame sent\nI0821 23:12:15.181772 368 log.go:172] (0xc000b634a0) Data frame received for 3\nI0821 23:12:15.181805 368 log.go:172] (0xc0006c8780) (3) Data frame handling\nI0821 23:12:15.181895 368 log.go:172] (0xc000b634a0) Data frame received for 5\nI0821 23:12:15.181908 368 log.go:172] (0xc000515540) (5) Data frame handling\nI0821 23:12:15.183996 368 log.go:172] (0xc000b634a0) Data frame received for 1\nI0821 23:12:15.184011 368 log.go:172] (0xc000a64780) (1) Data frame handling\nI0821 23:12:15.184022 368 log.go:172] (0xc000a64780) (1) Data frame sent\nI0821 23:12:15.184034 368 log.go:172] (0xc000b634a0) (0xc000a64780) Stream removed, broadcasting: 1\nI0821 23:12:15.184049 368 log.go:172] (0xc000b634a0) Go away received\nI0821 23:12:15.184543 368 log.go:172] (0xc000b634a0) (0xc000a64780) Stream removed, broadcasting: 1\nI0821 23:12:15.184590 368 log.go:172] (0xc000b634a0) (0xc0006c8780) Stream removed, broadcasting: 3\nI0821 23:12:15.184609 368 log.go:172] (0xc000b634a0) (0xc000515540) Stream removed, broadcasting: 5\n" Aug 21 23:12:15.194: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:12:15.194: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:12:15.197: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 21 23:12:25.202: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 23:12:25.202: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 23:12:25.249: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:25.249: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:04 +0000 UTC }] Aug 21 23:12:25.250: INFO: Aug 21 23:12:25.250: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 21 23:12:26.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.961999712s Aug 21 23:12:27.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.956385366s Aug 21 23:12:28.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.9522795s Aug 21 23:12:29.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.901781955s Aug 21 23:12:30.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.897696661s Aug 21 23:12:31.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.892798854s Aug 21 23:12:32.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.889452596s 
Aug 21 23:12:33.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.877321123s Aug 21 23:12:34.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 871.912406ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5802 Aug 21 23:12:35.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:12:35.586: INFO: stderr: "I0821 23:12:35.479898 389 log.go:172] (0xc000a18630) (0xc0006ade00) Create stream\nI0821 23:12:35.479982 389 log.go:172] (0xc000a18630) (0xc0006ade00) Stream added, broadcasting: 1\nI0821 23:12:35.482614 389 log.go:172] (0xc000a18630) Reply frame received for 1\nI0821 23:12:35.482664 389 log.go:172] (0xc000a18630) (0xc0005d0780) Create stream\nI0821 23:12:35.482682 389 log.go:172] (0xc000a18630) (0xc0005d0780) Stream added, broadcasting: 3\nI0821 23:12:35.483681 389 log.go:172] (0xc000a18630) Reply frame received for 3\nI0821 23:12:35.483728 389 log.go:172] (0xc000a18630) (0xc000211540) Create stream\nI0821 23:12:35.483743 389 log.go:172] (0xc000a18630) (0xc000211540) Stream added, broadcasting: 5\nI0821 23:12:35.484670 389 log.go:172] (0xc000a18630) Reply frame received for 5\nI0821 23:12:35.576001 389 log.go:172] (0xc000a18630) Data frame received for 3\nI0821 23:12:35.576052 389 log.go:172] (0xc0005d0780) (3) Data frame handling\nI0821 23:12:35.576067 389 log.go:172] (0xc0005d0780) (3) Data frame sent\nI0821 23:12:35.576075 389 log.go:172] (0xc000a18630) Data frame received for 3\nI0821 23:12:35.576083 389 log.go:172] (0xc0005d0780) (3) Data frame handling\nI0821 23:12:35.576129 389 log.go:172] (0xc000a18630) Data frame received for 5\nI0821 23:12:35.576153 389 log.go:172] (0xc000211540) (5) Data frame handling\nI0821 23:12:35.576166 389 log.go:172] (0xc000211540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 23:12:35.576177 389 log.go:172] (0xc000a18630) Data frame received for 5\nI0821 23:12:35.576213 389 log.go:172] (0xc000211540) (5) Data frame handling\nI0821 23:12:35.577618 389 log.go:172] (0xc000a18630) Data frame received for 1\nI0821 23:12:35.577637 389 log.go:172] (0xc0006ade00) (1) Data frame handling\nI0821 23:12:35.577651 389 log.go:172] (0xc0006ade00) (1) Data frame sent\nI0821 23:12:35.577664 389 log.go:172] (0xc000a18630) (0xc0006ade00) Stream removed, broadcasting: 1\nI0821 23:12:35.577683 389 log.go:172] (0xc000a18630) Go away received\nI0821 23:12:35.577965 389 log.go:172] (0xc000a18630) (0xc0006ade00) Stream removed, broadcasting: 1\nI0821 23:12:35.577987 389 log.go:172] (0xc000a18630) (0xc0005d0780) Stream removed, broadcasting: 3\nI0821 23:12:35.577997 389 log.go:172] (0xc000a18630) (0xc000211540) Stream removed, broadcasting: 5\n" Aug 21 23:12:35.586: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:12:35.586: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:12:35.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:12:35.784: INFO: stderr: "I0821 23:12:35.703931 413 log.go:172] (0xc000bd8790) (0xc000bce320) Create stream\nI0821 23:12:35.703997 413 log.go:172] (0xc000bd8790) (0xc000bce320) Stream 
added, broadcasting: 1\nI0821 23:12:35.707508 413 log.go:172] (0xc000bd8790) Reply frame received for 1\nI0821 23:12:35.707571 413 log.go:172] (0xc000bd8790) (0xc000a12000) Create stream\nI0821 23:12:35.707593 413 log.go:172] (0xc000bd8790) (0xc000a12000) Stream added, broadcasting: 3\nI0821 23:12:35.708429 413 log.go:172] (0xc000bd8790) Reply frame received for 3\nI0821 23:12:35.708460 413 log.go:172] (0xc000bd8790) (0xc000a120a0) Create stream\nI0821 23:12:35.708469 413 log.go:172] (0xc000bd8790) (0xc000a120a0) Stream added, broadcasting: 5\nI0821 23:12:35.709465 413 log.go:172] (0xc000bd8790) Reply frame received for 5\nI0821 23:12:35.772400 413 log.go:172] (0xc000bd8790) Data frame received for 5\nI0821 23:12:35.772436 413 log.go:172] (0xc000a120a0) (5) Data frame handling\nI0821 23:12:35.772450 413 log.go:172] (0xc000a120a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 23:12:35.772470 413 log.go:172] (0xc000bd8790) Data frame received for 3\nI0821 23:12:35.772479 413 log.go:172] (0xc000a12000) (3) Data frame handling\nI0821 23:12:35.772489 413 log.go:172] (0xc000a12000) (3) Data frame sent\nI0821 23:12:35.772498 413 log.go:172] (0xc000bd8790) Data frame received for 3\nI0821 23:12:35.772507 413 log.go:172] (0xc000a12000) (3) Data frame handling\nI0821 23:12:35.772592 413 log.go:172] (0xc000bd8790) Data frame received for 5\nI0821 23:12:35.772614 413 log.go:172] (0xc000a120a0) (5) Data frame handling\nI0821 23:12:35.774539 413 log.go:172] (0xc000bd8790) Data frame received for 1\nI0821 23:12:35.774567 413 log.go:172] (0xc000bce320) (1) Data frame handling\nI0821 23:12:35.774587 413 log.go:172] (0xc000bce320) (1) Data frame sent\nI0821 23:12:35.774604 413 log.go:172] (0xc000bd8790) (0xc000bce320) Stream removed, broadcasting: 1\nI0821 23:12:35.774623 413 log.go:172] (0xc000bd8790) Go away received\nI0821 23:12:35.775166 413 log.go:172] (0xc000bd8790) (0xc000bce320) Stream removed, broadcasting: 1\nI0821 23:12:35.775189 413 log.go:172] (0xc000bd8790) (0xc000a12000) Stream removed, broadcasting: 3\nI0821 23:12:35.775200 413 log.go:172] (0xc000bd8790) (0xc000a120a0) Stream removed, broadcasting: 5\n" Aug 21 23:12:35.784: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:12:35.784: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:12:35.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:12:35.993: INFO: stderr: "I0821 23:12:35.913774 436 log.go:172] (0xc0003c2630) (0xc0005b3f40) Create stream\nI0821 23:12:35.913821 436 log.go:172] (0xc0003c2630) (0xc0005b3f40) Stream added, broadcasting: 1\nI0821 23:12:35.916586 436 log.go:172] (0xc0003c2630) Reply frame received for 1\nI0821 23:12:35.916630 436 log.go:172] (0xc0003c2630) (0xc0004f4780) Create stream\nI0821 23:12:35.916655 436 log.go:172] (0xc0003c2630) (0xc0004f4780) Stream added, broadcasting: 3\nI0821 23:12:35.918067 436 log.go:172] (0xc0003c2630) Reply frame received for 3\nI0821 23:12:35.918125 436 log.go:172] (0xc0003c2630) (0xc00072eaa0) Create stream\nI0821 23:12:35.918145 436 log.go:172] (0xc0003c2630) (0xc00072eaa0) Stream added, broadcasting: 5\nI0821 23:12:35.919210 436 log.go:172] (0xc0003c2630) Reply frame received for 5\nI0821 
23:12:35.982804 436 log.go:172] (0xc0003c2630) Data frame received for 3\nI0821 23:12:35.982839 436 log.go:172] (0xc0004f4780) (3) Data frame handling\nI0821 23:12:35.982868 436 log.go:172] (0xc0004f4780) (3) Data frame sent\nI0821 23:12:35.982885 436 log.go:172] (0xc0003c2630) Data frame received for 3\nI0821 23:12:35.982898 436 log.go:172] (0xc0004f4780) (3) Data frame handling\nI0821 23:12:35.983080 436 log.go:172] (0xc0003c2630) Data frame received for 5\nI0821 23:12:35.983115 436 log.go:172] (0xc00072eaa0) (5) Data frame handling\nI0821 23:12:35.983136 436 log.go:172] (0xc00072eaa0) (5) Data frame sent\nI0821 23:12:35.983171 436 log.go:172] (0xc0003c2630) Data frame received for 5\nI0821 23:12:35.983199 436 log.go:172] (0xc00072eaa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 23:12:35.984932 436 log.go:172] (0xc0003c2630) Data frame received for 1\nI0821 23:12:35.984952 436 log.go:172] (0xc0005b3f40) (1) Data frame handling\nI0821 23:12:35.984973 436 log.go:172] (0xc0005b3f40) (1) Data frame sent\nI0821 23:12:35.985199 436 log.go:172] (0xc0003c2630) (0xc0005b3f40) Stream removed, broadcasting: 1\nI0821 23:12:35.985317 436 log.go:172] (0xc0003c2630) Go away received\nI0821 23:12:35.985550 436 log.go:172] (0xc0003c2630) (0xc0005b3f40) Stream removed, broadcasting: 1\nI0821 23:12:35.985577 436 log.go:172] (0xc0003c2630) (0xc0004f4780) Stream removed, broadcasting: 3\nI0821 23:12:35.985604 436 log.go:172] (0xc0003c2630) (0xc00072eaa0) Stream removed, broadcasting: 5\n" Aug 21 23:12:35.993: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:12:35.993: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:12:35.997: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 23:12:35.997: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 23:12:35.997: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 21 23:12:35.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:12:36.198: INFO: stderr: "I0821 23:12:36.129208 458 log.go:172] (0xc0004ed130) (0xc0006999a0) Create stream\nI0821 23:12:36.129283 458 log.go:172] (0xc0004ed130) (0xc0006999a0) Stream added, broadcasting: 1\nI0821 23:12:36.131498 458 log.go:172] (0xc0004ed130) Reply frame received for 1\nI0821 23:12:36.131528 458 log.go:172] (0xc0004ed130) (0xc000699b80) Create stream\nI0821 23:12:36.131535 458 log.go:172] (0xc0004ed130) (0xc000699b80) Stream added, broadcasting: 3\nI0821 23:12:36.132391 458 log.go:172] (0xc0004ed130) Reply frame received for 3\nI0821 23:12:36.132444 458 log.go:172] (0xc0004ed130) (0xc00058a000) Create stream\nI0821 23:12:36.132471 458 log.go:172] (0xc0004ed130) (0xc00058a000) Stream added, broadcasting: 5\nI0821 23:12:36.133474 458 log.go:172] (0xc0004ed130) Reply frame received for 5\nI0821 23:12:36.191843 458 log.go:172] (0xc0004ed130) Data frame received for 3\nI0821 23:12:36.191871 458 log.go:172] (0xc000699b80) (3) Data frame handling\nI0821 23:12:36.191879 458 log.go:172] (0xc000699b80) (3) Data frame sent\nI0821 23:12:36.191884 458 
log.go:172] (0xc0004ed130) Data frame received for 3\nI0821 23:12:36.191888 458 log.go:172] (0xc000699b80) (3) Data frame handling\nI0821 23:12:36.191915 458 log.go:172] (0xc0004ed130) Data frame received for 5\nI0821 23:12:36.191925 458 log.go:172] (0xc00058a000) (5) Data frame handling\nI0821 23:12:36.191930 458 log.go:172] (0xc00058a000) (5) Data frame sent\nI0821 23:12:36.191935 458 log.go:172] (0xc0004ed130) Data frame received for 5\nI0821 23:12:36.191939 458 log.go:172] (0xc00058a000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:12:36.193484 458 log.go:172] (0xc0004ed130) Data frame received for 1\nI0821 23:12:36.193498 458 log.go:172] (0xc0006999a0) (1) Data frame handling\nI0821 23:12:36.193511 458 log.go:172] (0xc0006999a0) (1) Data frame sent\nI0821 23:12:36.193524 458 log.go:172] (0xc0004ed130) (0xc0006999a0) Stream removed, broadcasting: 1\nI0821 23:12:36.193536 458 log.go:172] (0xc0004ed130) Go away received\nI0821 23:12:36.193910 458 log.go:172] (0xc0004ed130) (0xc0006999a0) Stream removed, broadcasting: 1\nI0821 23:12:36.193934 458 log.go:172] (0xc0004ed130) (0xc000699b80) Stream removed, broadcasting: 3\nI0821 23:12:36.193945 458 log.go:172] (0xc0004ed130) (0xc00058a000) Stream removed, broadcasting: 5\n" Aug 21 23:12:36.198: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:12:36.198: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:12:36.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:12:36.484: INFO: stderr: "I0821 23:12:36.366130 481 log.go:172] (0xc000978bb0) (0xc000a28000) Create stream\nI0821 23:12:36.366182 481 log.go:172] (0xc000978bb0) (0xc000a28000) Stream added, broadcasting: 1\nI0821 23:12:36.368582 481 log.go:172] (0xc000978bb0) Reply frame received for 1\nI0821 23:12:36.368632 481 log.go:172] (0xc000978bb0) (0xc00099e0a0) Create stream\nI0821 23:12:36.368645 481 log.go:172] (0xc000978bb0) (0xc00099e0a0) Stream added, broadcasting: 3\nI0821 23:12:36.369579 481 log.go:172] (0xc000978bb0) Reply frame received for 3\nI0821 23:12:36.369606 481 log.go:172] (0xc000978bb0) (0xc00099e140) Create stream\nI0821 23:12:36.369614 481 log.go:172] (0xc000978bb0) (0xc00099e140) Stream added, broadcasting: 5\nI0821 23:12:36.370487 481 log.go:172] (0xc000978bb0) Reply frame received for 5\nI0821 23:12:36.434888 481 log.go:172] (0xc000978bb0) Data frame received for 5\nI0821 23:12:36.434923 481 log.go:172] (0xc00099e140) (5) Data frame handling\nI0821 23:12:36.434945 481 log.go:172] (0xc00099e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:12:36.474149 481 log.go:172] (0xc000978bb0) Data frame received for 3\nI0821 23:12:36.474188 481 log.go:172] (0xc00099e0a0) (3) Data frame handling\nI0821 23:12:36.474223 481 log.go:172] (0xc00099e0a0) (3) Data frame sent\nI0821 23:12:36.474242 481 log.go:172] (0xc000978bb0) Data frame received for 3\nI0821 23:12:36.474262 481 log.go:172] (0xc00099e0a0) (3) Data frame handling\nI0821 23:12:36.474484 481 log.go:172] (0xc000978bb0) Data frame received for 5\nI0821 23:12:36.474504 481 log.go:172] (0xc00099e140) (5) Data frame handling\nI0821 23:12:36.476135 481 log.go:172] (0xc000978bb0) Data frame received for 1\nI0821 23:12:36.476158 481 log.go:172] (0xc000a28000) (1) Data frame 
handling\nI0821 23:12:36.476189 481 log.go:172] (0xc000a28000) (1) Data frame sent\nI0821 23:12:36.476266 481 log.go:172] (0xc000978bb0) (0xc000a28000) Stream removed, broadcasting: 1\nI0821 23:12:36.476338 481 log.go:172] (0xc000978bb0) Go away received\nI0821 23:12:36.476710 481 log.go:172] (0xc000978bb0) (0xc000a28000) Stream removed, broadcasting: 1\nI0821 23:12:36.476838 481 log.go:172] (0xc000978bb0) (0xc00099e0a0) Stream removed, broadcasting: 3\nI0821 23:12:36.476859 481 log.go:172] (0xc000978bb0) (0xc00099e140) Stream removed, broadcasting: 5\n" Aug 21 23:12:36.484: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:12:36.484: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:12:36.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:12:36.739: INFO: stderr: "I0821 23:12:36.609729 500 log.go:172] (0xc0009f4b00) (0xc000651a40) Create stream\nI0821 23:12:36.609812 500 log.go:172] (0xc0009f4b00) (0xc000651a40) Stream added, broadcasting: 1\nI0821 23:12:36.612638 500 log.go:172] (0xc0009f4b00) Reply frame received for 1\nI0821 23:12:36.612681 500 log.go:172] (0xc0009f4b00) (0xc000651c20) Create stream\nI0821 23:12:36.612693 500 log.go:172] (0xc0009f4b00) (0xc000651c20) Stream added, broadcasting: 3\nI0821 23:12:36.613895 500 log.go:172] (0xc0009f4b00) Reply frame received for 3\nI0821 23:12:36.613962 500 log.go:172] (0xc0009f4b00) (0xc0009ac000) Create stream\nI0821 23:12:36.613994 500 log.go:172] (0xc0009f4b00) (0xc0009ac000) Stream added, broadcasting: 5\nI0821 23:12:36.614988 500 log.go:172] (0xc0009f4b00) Reply frame received for 5\nI0821 23:12:36.687953 500 log.go:172] (0xc0009f4b00) Data frame received for 5\nI0821 23:12:36.687987 500 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0821 23:12:36.688019 500 log.go:172] (0xc0009ac000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:12:36.729198 500 log.go:172] (0xc0009f4b00) Data frame received for 3\nI0821 23:12:36.729247 500 log.go:172] (0xc000651c20) (3) Data frame handling\nI0821 23:12:36.729278 500 log.go:172] (0xc000651c20) (3) Data frame sent\nI0821 23:12:36.729389 500 log.go:172] (0xc0009f4b00) Data frame received for 3\nI0821 23:12:36.729422 500 log.go:172] (0xc000651c20) (3) Data frame handling\nI0821 23:12:36.729618 500 log.go:172] (0xc0009f4b00) Data frame received for 5\nI0821 23:12:36.729648 500 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0821 23:12:36.731332 500 log.go:172] (0xc0009f4b00) Data frame received for 1\nI0821 23:12:36.731368 500 log.go:172] (0xc000651a40) (1) Data frame handling\nI0821 23:12:36.731409 500 log.go:172] (0xc000651a40) (1) Data frame sent\nI0821 23:12:36.731443 500 log.go:172] (0xc0009f4b00) (0xc000651a40) Stream removed, broadcasting: 1\nI0821 23:12:36.731522 500 log.go:172] (0xc0009f4b00) Go away received\nI0821 23:12:36.731941 500 log.go:172] (0xc0009f4b00) (0xc000651a40) Stream removed, broadcasting: 1\nI0821 23:12:36.731963 500 log.go:172] (0xc0009f4b00) (0xc000651c20) Stream removed, broadcasting: 3\nI0821 23:12:36.731979 500 log.go:172] (0xc0009f4b00) (0xc0009ac000) Stream removed, broadcasting: 5\n" Aug 21 23:12:36.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:12:36.739: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:12:36.739: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 23:12:36.758: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 21 23:12:46.766: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 23:12:46.766: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 21 23:12:46.766: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 21 23:12:46.823: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:46.823: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:04 +0000 UTC }] Aug 21 23:12:46.823: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:46.823: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:46.823: INFO: Aug 21 23:12:46.823: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 21 23:12:47.827: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:47.828: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:04 +0000 UTC }] Aug 21 23:12:47.828: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:47.828: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 
23:12:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:47.828: INFO: Aug 21 23:12:47.828: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 21 23:12:48.831: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:48.831: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:04 +0000 UTC }] Aug 21 23:12:48.831: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:48.831: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:48.831: INFO: Aug 21 23:12:48.831: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 21 23:12:49.839: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:49.839: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:49.839: INFO: Aug 21 23:12:49.839: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 23:12:50.843: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:50.843: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:50.843: INFO: Aug 21 23:12:50.843: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 23:12:51.857: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:51.857: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:51.857: INFO: Aug 21 23:12:51.857: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 23:12:52.862: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:52.862: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:52.862: INFO: Aug 21 23:12:52.862: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 23:12:53.866: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:53.866: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:53.867: INFO: Aug 21 23:12:53.867: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 23:12:54.871: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:54.871: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:54.871: INFO: Aug 21 23:12:54.871: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 21 23:12:55.875: INFO: POD NODE PHASE GRACE CONDITIONS Aug 21 23:12:55.875: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 23:12:25 +0000 UTC }] Aug 21 23:12:55.875: INFO: Aug 21 23:12:55.875: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5802 Aug 21 23:12:56.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:12:57.026: INFO: rc: 1 Aug 21 
23:12:57.026: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Aug 21 23:13:07.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:13:07.134: INFO: rc: 1 Aug 21 23:13:07.134: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:13:17.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:13:17.229: INFO: rc: 1 Aug 21 23:13:17.229: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:13:27.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:13:27.315: INFO: rc: 1 Aug 21 23:13:27.315: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:13:37.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:13:37.412: INFO: rc: 1 Aug 21 23:13:37.413: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:13:47.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:13:47.509: INFO: rc: 1 Aug 21 23:13:47.509: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:13:57.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:13:57.676: INFO: rc: 1 Aug 21 23:13:57.677: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:14:07.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:14:07.775: INFO: rc: 1 Aug 21 23:14:07.775: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:14:17.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:14:17.874: INFO: rc: 1 Aug 21 23:14:17.874: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:14:27.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:14:27.971: INFO: rc: 1 Aug 21 23:14:27.971: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:14:37.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:14:38.071: INFO: rc: 1 Aug 21 23:14:38.071: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:14:48.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:14:48.176: INFO: rc: 1 Aug 21 23:14:48.176: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:14:58.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:14:58.283: INFO: rc: 1 Aug 21 23:14:58.283: INFO: Waiting 10s to retry failed RunHostCmd: 
error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:15:08.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:15:08.385: INFO: rc: 1 Aug 21 23:15:08.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:15:18.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:15:18.494: INFO: rc: 1 Aug 21 23:15:18.494: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:15:28.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:15:28.606: INFO: rc: 1 Aug 21 23:15:28.606: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:15:38.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:15:38.704: INFO: rc: 1 Aug 21 23:15:38.704: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:15:48.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:15:48.798: INFO: rc: 1 Aug 21 23:15:48.798: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:15:58.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:15:58.906: INFO: rc: 1 Aug 21 23:15:58.906: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:16:08.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:16:09.015: INFO: rc: 1 Aug 21 23:16:09.015: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:16:19.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:16:19.130: INFO: rc: 1 Aug 21 23:16:19.130: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:16:29.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:16:29.253: INFO: rc: 1 Aug 21 23:16:29.253: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:16:39.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:16:39.357: INFO: rc: 1 Aug 21 23:16:39.357: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:16:49.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:16:49.461: INFO: rc: 1 Aug 21 23:16:49.461: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:16:59.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:16:59.557: INFO: rc: 1 Aug 21 23:16:59.557: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:17:09.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:17:09.665: INFO: rc: 1 Aug 21 23:17:09.665: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:17:19.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:17:19.782: INFO: rc: 1 Aug 21 23:17:19.782: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:17:29.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:17:29.895: INFO: rc: 1 Aug 21 23:17:29.895: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:17:39.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:17:39.992: INFO: rc: 1 Aug 21 23:17:39.992: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:17:49.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:17:50.088: INFO: rc: 1 Aug 21 23:17:50.088: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 21 23:18:00.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5802 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:18:00.203: INFO: rc: 1 Aug 21 23:18:00.203: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Aug 21 23:18:00.203: INFO: Scaling statefulset ss to 0 Aug 21 23:18:00.211: INFO: 
Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 21 23:18:00.213: INFO: Deleting all statefulset in ns statefulset-5802 Aug 21 23:18:00.215: INFO: Scaling statefulset ss to 0 Aug 21 23:18:00.222: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 23:18:00.224: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:00.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5802" for this suite. • [SLOW TEST:355.437 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":13,"skipped":122,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:00.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Aug 21 23:18:00.362: INFO: Waiting up to 5m0s for pod "var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9" in namespace "var-expansion-7733" to be "success or failure" Aug 21 23:18:00.367: INFO: Pod "var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.139148ms Aug 21 23:18:02.374: INFO: Pod "var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012626541s Aug 21 23:18:04.378: INFO: Pod "var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016638711s STEP: Saw pod success Aug 21 23:18:04.378: INFO: Pod "var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9" satisfied condition "success or failure" Aug 21 23:18:04.382: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9 container dapi-container: STEP: delete the pod Aug 21 23:18:04.548: INFO: Waiting for pod var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9 to disappear Aug 21 23:18:04.683: INFO: Pod var-expansion-1dc79ec9-1595-495c-8de3-d6a1f73f27c9 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:04.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7733" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":138,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:04.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 21 23:18:04.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7" in namespace "projected-2148" to be "success or failure" Aug 21 23:18:04.757: INFO: Pod "downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.497004ms Aug 21 23:18:06.809: INFO: Pod "downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065516857s Aug 21 23:18:08.814: INFO: Pod "downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07054244s STEP: Saw pod success Aug 21 23:18:08.814: INFO: Pod "downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7" satisfied condition "success or failure" Aug 21 23:18:08.818: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7 container client-container: STEP: delete the pod Aug 21 23:18:08.853: INFO: Waiting for pod downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7 to disappear Aug 21 23:18:08.886: INFO: Pod downwardapi-volume-6d3095f5-69aa-471d-98dd-cae1387980d7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:08.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2148" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":141,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:08.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check is all data is printed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:18:09.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 21 23:18:09.179: INFO: stderr: "" Aug 21 23:18:09.179: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:09.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-95" for this suite. 
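
The "Kubectl version" case above reduces to running kubectl version and checking that both the client and the server version.Info blocks were printed. A minimal standalone sketch of that kind of check follows; it is an illustration, not the e2e framework's own code, and it assumes kubectl plus the kubeconfig path shown in the log are available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the log shows, with stdout and stderr combined.
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	// "Is all data printed": both version.Info halves must be present.
	for _, want := range []string{"Client Version", "Server Version", "GitVersion"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing %q in kubectl version output\n", want)
			return
		}
	}
	fmt.Println("kubectl version printed all expected fields")
}
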
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":16,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:09.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-a792751a-8cac-4bc1-aa4d-36641b7d1a8f STEP: Creating a pod to test consume secrets Aug 21 23:18:09.302: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71" in namespace "projected-3605" to be "success or failure" Aug 21 23:18:09.313: INFO: Pod "pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71": Phase="Pending", Reason="", readiness=false. Elapsed: 11.300069ms Aug 21 23:18:11.372: INFO: Pod "pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069924707s Aug 21 23:18:13.376: INFO: Pod "pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07406924s STEP: Saw pod success Aug 21 23:18:13.376: INFO: Pod "pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71" satisfied condition "success or failure" Aug 21 23:18:13.379: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71 container projected-secret-volume-test: STEP: delete the pod Aug 21 23:18:13.427: INFO: Waiting for pod pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71 to disappear Aug 21 23:18:13.469: INFO: Pod pod-projected-secrets-e3778f5a-c776-40f1-9849-1a0b85a18a71 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:13.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3605" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":180,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:13.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-779fcdee-2c5d-4252-8bb8-899e2c50d2cf STEP: Creating a pod to test consume secrets Aug 21 23:18:13.902: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5" in namespace "projected-4593" to be "success or failure" Aug 21 23:18:13.913: INFO: Pod "pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.302922ms Aug 21 23:18:15.959: INFO: Pod "pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056431965s Aug 21 23:18:17.963: INFO: Pod "pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060854293s STEP: Saw pod success Aug 21 23:18:17.963: INFO: Pod "pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5" satisfied condition "success or failure" Aug 21 23:18:17.966: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5 container projected-secret-volume-test: STEP: delete the pod Aug 21 23:18:18.007: INFO: Waiting for pod pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5 to disappear Aug 21 23:18:18.014: INFO: Pod pod-projected-secrets-7f12af17-97e5-4e0f-b8ca-54d59877cba5 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:18.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4593" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":185,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:18.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-c8126673-d485-4184-92dc-5ac99025bb1c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c8126673-d485-4184-92dc-5ac99025bb1c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:24.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5368" for this suite. • [SLOW TEST:6.259 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":189,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:24.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:31.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6323" for this suite. • [SLOW TEST:7.204 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":20,"skipped":208,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:18:31.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 21 23:18:39.666: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 23:18:39.675: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 23:18:41.675: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 23:18:41.828: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 23:18:43.675: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 23:18:43.679: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 23:18:45.675: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 23:18:45.679: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:18:45.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7096" for this suite. 
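
The prestop case above wires an HTTP preStop hook into a pod, deletes the pod, and then checks that the hook's request reached the helper server created in BeforeEach (the "check prestop hook" step). A sketch of the hook definition with the k8s.io/api types; the /echo path and port are assumptions, since the excerpt does not show the handler pod's address, and note the type naming caveat in the comment:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	lifecycle := &corev1.Lifecycle{
		// Recent k8s.io/api releases call this LifecycleHandler; the
		// v1.17-era API contemporary with this run called it corev1.Handler.
		PreStop: &corev1.LifecycleHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop",  // assumed target path on the hook-handler pod
				Port: intstr.FromInt(8080), // assumed port
			},
		},
	}
	b, _ := json.MarshalIndent(lifecycle, "", "  ")
	fmt.Println(string(b))
}

Deletion then has to outlive the hook: the repeated "Pod pod-with-prestop-http-hook still exists" lines are the framework waiting out graceful termination while the kubelet runs the hook.
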
• [SLOW TEST:14.211 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":210,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:18:45.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-c5dfb818-be5f-4ef3-abe2-cc6a6b7778a4
STEP: Creating a pod to test consume configMaps
Aug 21 23:18:45.818: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac" in namespace "projected-6915" to be "success or failure"
Aug 21 23:18:45.824: INFO: Pod "pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146772ms
Aug 21 23:18:47.859: INFO: Pod "pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040655713s
Aug 21 23:18:49.862: INFO: Pod "pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044613566s
STEP: Saw pod success
Aug 21 23:18:49.863: INFO: Pod "pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac" satisfied condition "success or failure"
Aug 21 23:18:49.865: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 23:18:49.891: INFO: Waiting for pod pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac to disappear
Aug 21 23:18:50.013: INFO: Pod pod-projected-configmaps-8374e02c-6cb9-4c15-ba46-3ab4e22f6eac no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:18:50.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6915" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":256,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:18:50.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 23:18:50.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8442'
Aug 21 23:18:50.537: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 21 23:18:50.537: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Aug 21 23:18:50.585: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-4mbc8]
Aug 21 23:18:50.585: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-4mbc8" in namespace "kubectl-8442" to be "running and ready"
Aug 21 23:18:50.588: INFO: Pod "e2e-test-httpd-rc-4mbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670867ms
Aug 21 23:18:52.672: INFO: Pod "e2e-test-httpd-rc-4mbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086545299s
Aug 21 23:18:54.675: INFO: Pod "e2e-test-httpd-rc-4mbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089698773s
Aug 21 23:18:56.690: INFO: Pod "e2e-test-httpd-rc-4mbc8": Phase="Running", Reason="", readiness=true. Elapsed: 6.104323335s
Aug 21 23:18:56.690: INFO: Pod "e2e-test-httpd-rc-4mbc8" satisfied condition "running and ready"
Aug 21 23:18:56.690: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-4mbc8]
Aug 21 23:18:56.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8442'
Aug 21 23:18:56.821: INFO: stderr: ""
Aug 21 23:18:56.821: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.119. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.119. Set the 'ServerName' directive globally to suppress this message\n[Fri Aug 21 23:18:53.927552 2020] [mpm_event:notice] [pid 1:tid 139938849356648] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Aug 21 23:18:53.927602 2020] [core:notice] [pid 1:tid 139938849356648] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Aug 21 23:18:56.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8442'
Aug 21 23:18:56.963: INFO: stderr: ""
Aug 21 23:18:56.963: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:18:56.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8442" for this suite.
• [SLOW TEST:6.761 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":23,"skipped":268,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:18:56.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8365
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8365
I0821 23:18:57.110469 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8365, replica count: 2
I0821 23:19:00.160911 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0821 23:19:03.161144 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 21 23:19:03.161: INFO: Creating new exec pod
Aug 21 23:19:08.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8365 execpodzbhbp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 23:19:08.474: INFO: stderr: "I0821 23:19:08.406710 1233 log.go:172] (0xc000a76000) (0xc0009b4000) Create stream\nI0821 23:19:08.406779 1233 log.go:172] (0xc000a76000) (0xc0009b4000) Stream added, broadcasting: 1\nI0821 23:19:08.410511 1233 log.go:172] (0xc000a76000) Reply frame received for 1\nI0821 23:19:08.410583 1233 log.go:172] (0xc000a76000) (0xc0009b40a0) Create stream\nI0821 23:19:08.410608 1233 log.go:172] (0xc000a76000) (0xc0009b40a0) Stream added, broadcasting: 3\nI0821 23:19:08.412137 1233 log.go:172] (0xc000a76000) Reply frame received for 3\nI0821 23:19:08.412175 1233 log.go:172] (0xc000a76000) (0xc0009b4140) Create stream\nI0821 23:19:08.412184 1233 log.go:172] (0xc000a76000) (0xc0009b4140) Stream added, broadcasting: 5\nI0821 23:19:08.413353 1233 log.go:172] (0xc000a76000) Reply frame received for 5\nI0821 23:19:08.464315 1233 log.go:172] (0xc000a76000) Data frame received for 5\nI0821 23:19:08.464346 1233 log.go:172] (0xc0009b4140) (5) Data frame handling\nI0821 23:19:08.464380 1233 log.go:172] (0xc0009b4140) (5) Data frame sent\nI0821 23:19:08.464407 1233 log.go:172] (0xc000a76000) Data frame received for 5\nI0821 23:19:08.464429 1233 log.go:172] (0xc0009b4140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 23:19:08.464512 1233 log.go:172] (0xc0009b4140) (5) Data frame sent\nI0821 23:19:08.465051 1233 log.go:172] (0xc000a76000) Data frame received for 5\nI0821 23:19:08.465070 1233 log.go:172] (0xc0009b4140) (5) Data frame handling\nI0821 23:19:08.465095 1233 log.go:172] (0xc000a76000) Data frame received for 3\nI0821 23:19:08.465109 1233 log.go:172] (0xc0009b40a0) (3) Data frame handling\nI0821 23:19:08.467016 1233 log.go:172] (0xc000a76000) Data frame received for 1\nI0821 23:19:08.467046 1233 log.go:172] (0xc0009b4000) (1) Data frame handling\nI0821 23:19:08.467061 1233 log.go:172] (0xc0009b4000) (1) Data frame sent\nI0821 23:19:08.467078 1233 log.go:172] (0xc000a76000) (0xc0009b4000) Stream removed, broadcasting: 1\nI0821 23:19:08.467096 1233 log.go:172] (0xc000a76000) Go away received\nI0821 23:19:08.467394 1233 log.go:172] (0xc000a76000) (0xc0009b4000) Stream removed, broadcasting: 1\nI0821 23:19:08.467411 1233 log.go:172] (0xc000a76000) (0xc0009b40a0) Stream removed, broadcasting: 3\nI0821 23:19:08.467420 1233 log.go:172] (0xc000a76000) (0xc0009b4140) Stream removed, broadcasting: 5\n"
Aug 21 23:19:08.474: INFO: stdout: ""
Aug 21 23:19:08.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8365 execpodzbhbp -- /bin/sh -x -c nc -zv -t -w 2 10.105.139.127 80'
Aug 21 23:19:08.679: INFO: stderr: "I0821 23:19:08.601609 1253 log.go:172] (0xc000b84d10) (0xc000a8a500) Create stream\nI0821 23:19:08.601661 1253 log.go:172] (0xc000b84d10) (0xc000a8a500) Stream added, broadcasting: 1\nI0821 23:19:08.604426 1253 log.go:172] (0xc000b84d10) Reply frame received for 1\nI0821 23:19:08.604484 1253 log.go:172] (0xc000b84d10) (0xc000a8a5a0) Create stream\nI0821 23:19:08.604503 1253 log.go:172] (0xc000b84d10) (0xc000a8a5a0) Stream added, broadcasting: 3\nI0821 23:19:08.605568 1253 log.go:172] (0xc000b84d10) Reply frame received for 3\nI0821 23:19:08.605600 1253 log.go:172] (0xc000b84d10) (0xc0009de140) Create stream\nI0821 23:19:08.605609 1253 log.go:172] (0xc000b84d10) (0xc0009de140) Stream added, broadcasting: 5\nI0821 23:19:08.606494 1253 log.go:172] (0xc000b84d10) Reply frame received for 5\nI0821 23:19:08.671201 1253 log.go:172] (0xc000b84d10) Data frame received for 5\nI0821 23:19:08.671220 1253 log.go:172] (0xc0009de140) (5) Data frame handling\nI0821 23:19:08.671226 1253 log.go:172] (0xc0009de140) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.139.127 80\nConnection to 10.105.139.127 80 port [tcp/http] succeeded!\nI0821 23:19:08.671236 1253 log.go:172] (0xc000b84d10) Data frame received for 3\nI0821 23:19:08.671240 1253 log.go:172] (0xc000a8a5a0) (3) Data frame handling\nI0821 23:19:08.671258 1253 log.go:172] (0xc000b84d10) Data frame received for 5\nI0821 23:19:08.671276 1253 log.go:172] (0xc0009de140) (5) Data frame handling\nI0821 23:19:08.672940 1253 log.go:172] (0xc000b84d10) Data frame received for 1\nI0821 23:19:08.672951 1253 log.go:172] (0xc000a8a500) (1) Data frame handling\nI0821 23:19:08.672956 1253 log.go:172] (0xc000a8a500) (1) Data frame sent\nI0821 23:19:08.673023 1253 log.go:172] (0xc000b84d10) (0xc000a8a500) Stream removed, broadcasting: 1\nI0821 23:19:08.673188 1253 log.go:172] (0xc000b84d10) Go away received\nI0821 23:19:08.673275 1253 log.go:172] (0xc000b84d10) (0xc000a8a500) Stream removed, broadcasting: 1\nI0821 23:19:08.673289 1253 log.go:172] (0xc000b84d10) (0xc000a8a5a0) Stream removed, broadcasting: 3\nI0821 23:19:08.673297 1253 log.go:172] (0xc000b84d10) (0xc0009de140) Stream removed, broadcasting: 5\n"
Aug 21 23:19:08.680: INFO: stdout: ""
Aug 21 23:19:08.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8365 execpodzbhbp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30427'
Aug 21 23:19:08.887: INFO: stderr: "I0821 23:19:08.801288 1273 log.go:172] (0xc0000f66e0) (0xc000b6e1e0) Create stream\nI0821 23:19:08.801337 1273 log.go:172] (0xc0000f66e0) (0xc000b6e1e0) Stream added, broadcasting: 1\nI0821 23:19:08.803889 1273 log.go:172] (0xc0000f66e0) Reply frame received for 1\nI0821 23:19:08.803930 1273 log.go:172] (0xc0000f66e0) (0xc000b6e280) Create stream\nI0821 23:19:08.803944 1273 log.go:172] (0xc0000f66e0) (0xc000b6e280) Stream added, broadcasting: 3\nI0821 23:19:08.805027 1273 log.go:172] (0xc0000f66e0) Reply frame received for 3\nI0821 23:19:08.805065 1273 log.go:172] (0xc0000f66e0) (0xc000653cc0) Create stream\nI0821 23:19:08.805078 1273 log.go:172] (0xc0000f66e0) (0xc000653cc0) Stream added, broadcasting: 5\nI0821 23:19:08.805915 1273 log.go:172] (0xc0000f66e0) Reply frame received for 5\nI0821 23:19:08.880852 1273 log.go:172] (0xc0000f66e0) Data frame received for 3\nI0821 23:19:08.880928 1273 log.go:172] (0xc000b6e280) (3) Data frame handling\nI0821 23:19:08.880967 1273 log.go:172] (0xc0000f66e0) Data frame received for 5\nI0821 23:19:08.880994 1273 log.go:172] (0xc000653cc0) (5) Data frame handling\nI0821 23:19:08.881024 1273 log.go:172] (0xc000653cc0) (5) Data frame sent\nI0821 23:19:08.881039 1273 log.go:172] (0xc0000f66e0) Data frame received for 5\nI0821 23:19:08.881051 1273 log.go:172] (0xc000653cc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 30427\nConnection to 172.18.0.6 30427 port [tcp/30427] succeeded!\nI0821 23:19:08.882109 1273 log.go:172] (0xc0000f66e0) Data frame received for 1\nI0821 23:19:08.882148 1273 log.go:172] (0xc000b6e1e0) (1) Data frame handling\nI0821 23:19:08.882175 1273 log.go:172] (0xc000b6e1e0) (1) Data frame sent\nI0821 23:19:08.882229 1273 log.go:172] (0xc0000f66e0) (0xc000b6e1e0) Stream removed, broadcasting: 1\nI0821 23:19:08.882290 1273 log.go:172] (0xc0000f66e0) Go away received\nI0821 23:19:08.882683 1273 log.go:172] (0xc0000f66e0) (0xc000b6e1e0) Stream removed, broadcasting: 1\nI0821 23:19:08.882705 1273 log.go:172] (0xc0000f66e0) (0xc000b6e280) Stream removed, broadcasting: 3\nI0821 23:19:08.882719 1273 log.go:172] (0xc0000f66e0) (0xc000653cc0) Stream removed, broadcasting: 5\n"
Aug 21 23:19:08.887: INFO: stdout: ""
Aug 21 23:19:08.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8365 execpodzbhbp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 30427'
Aug 21 23:19:09.078: INFO: stderr: "I0821 23:19:09.012801 1293 log.go:172] (0xc000ab5810) (0xc000ad2320) Create stream\nI0821 23:19:09.012844 1293 log.go:172] (0xc000ab5810) (0xc000ad2320) Stream added, broadcasting: 1\nI0821 23:19:09.015021 1293 log.go:172] (0xc000ab5810) Reply frame received for 1\nI0821 23:19:09.015052 1293 log.go:172] (0xc000ab5810) (0xc000aca6e0) Create stream\nI0821 23:19:09.015064 1293 log.go:172] (0xc000ab5810) (0xc000aca6e0) Stream added, broadcasting: 3\nI0821 23:19:09.015779 1293 log.go:172] (0xc000ab5810) Reply frame received for 3\nI0821 23:19:09.015819 1293 log.go:172] (0xc000ab5810) (0xc000a8e6e0) Create stream\nI0821 23:19:09.015830 1293 log.go:172] (0xc000ab5810) (0xc000a8e6e0) Stream added, broadcasting: 5\nI0821 23:19:09.016506 1293 log.go:172] (0xc000ab5810) Reply frame received for 5\nI0821 23:19:09.069837 1293 log.go:172] (0xc000ab5810) Data frame received for 5\nI0821 23:19:09.069869 1293 log.go:172] (0xc000a8e6e0) (5) Data frame handling\nI0821 23:19:09.069892 1293 log.go:172] (0xc000a8e6e0) (5) Data frame sent\nI0821 23:19:09.069907 1293 log.go:172] (0xc000ab5810) Data frame received for 5\nI0821 23:19:09.069921 1293 log.go:172] (0xc000a8e6e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 30427\nConnection to 172.18.0.3 30427 port [tcp/30427] succeeded!\nI0821 23:19:09.069982 1293 log.go:172] (0xc000ab5810) Data frame received for 3\nI0821 23:19:09.070001 1293 log.go:172] (0xc000aca6e0) (3) Data frame handling\nI0821 23:19:09.071173 1293 log.go:172] (0xc000ab5810) Data frame received for 1\nI0821 23:19:09.071257 1293 log.go:172] (0xc000ad2320) (1) Data frame handling\nI0821 23:19:09.071278 1293 log.go:172] (0xc000ad2320) (1) Data frame sent\nI0821 23:19:09.071287 1293 log.go:172] (0xc000ab5810) (0xc000ad2320) Stream removed, broadcasting: 1\nI0821 23:19:09.071300 1293 log.go:172] (0xc000ab5810) Go away received\nI0821 23:19:09.071630 1293 log.go:172] (0xc000ab5810) (0xc000ad2320) Stream removed, broadcasting: 1\nI0821 23:19:09.071643 1293 log.go:172] (0xc000ab5810) (0xc000aca6e0) Stream removed, broadcasting: 3\nI0821 23:19:09.071649 1293 log.go:172] (0xc000ab5810) (0xc000a8e6e0) Stream removed, broadcasting: 5\n"
Aug 21 23:19:09.078: INFO: stdout: ""
Aug 21 23:19:09.078: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:19:09.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8365" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:12.157 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":24,"skipped":275,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:19:09.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 21 23:19:09.198: INFO: Waiting up to 5m0s for pod "pod-70dee103-d118-428b-9a0e-fd91e79da5b9" in namespace "emptydir-7481" to be "success or failure"
Aug 21 23:19:09.202: INFO: Pod "pod-70dee103-d118-428b-9a0e-fd91e79da5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670217ms
Aug 21 23:19:11.206: INFO: Pod "pod-70dee103-d118-428b-9a0e-fd91e79da5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007841809s
Aug 21 23:19:13.209: INFO: Pod "pod-70dee103-d118-428b-9a0e-fd91e79da5b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011327388s
STEP: Saw pod success
Aug 21 23:19:13.209: INFO: Pod "pod-70dee103-d118-428b-9a0e-fd91e79da5b9" satisfied condition "success or failure"
Aug 21 23:19:13.212: INFO: Trying to get logs from node jerma-worker2 pod pod-70dee103-d118-428b-9a0e-fd91e79da5b9 container test-container:
STEP: delete the pod
Aug 21 23:19:13.259: INFO: Waiting for pod pod-70dee103-d118-428b-9a0e-fd91e79da5b9 to disappear
Aug 21 23:19:13.262: INFO: Pod pod-70dee103-d118-428b-9a0e-fd91e79da5b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:19:13.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7481" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":317,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:19:13.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 21 23:19:13.971: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 21 23:19:16.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648754, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 23:19:18.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648754, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Aug 21 23:19:20.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648754, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648753, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 23:19:23.211: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:19:23.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:19:24.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1572" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:11.172 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":26,"skipped":325,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:19:24.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7192
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7192
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7192
Aug 21 23:19:24.567: INFO: Found 0 stateful pods, waiting for 1
Aug 21 23:19:34.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Aug 21 23:19:44.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 21 23:19:44.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7192 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 23:19:44.980: INFO: stderr: "I0821 23:19:44.703674 1313 log.go:172] (0xc0000f62c0) (0xc00065e780) Create stream\nI0821 23:19:44.703745 1313 log.go:172] (0xc0000f62c0) (0xc00065e780) Stream added, broadcasting: 1\nI0821 23:19:44.706666 1313 log.go:172] (0xc0000f62c0) Reply frame received for 1\nI0821 23:19:44.706765 1313 log.go:172] (0xc0000f62c0) (0xc00044b540) Create stream\nI0821 23:19:44.706793 1313 log.go:172] (0xc0000f62c0) (0xc00044b540) Stream added, broadcasting: 3\nI0821 23:19:44.707777 1313 log.go:172] (0xc0000f62c0) Reply frame received for 3\nI0821 23:19:44.707823 1313 log.go:172] (0xc0000f62c0) (0xc000652000) Create stream\nI0821 23:19:44.707839 1313 log.go:172] (0xc0000f62c0) (0xc000652000) Stream added, broadcasting: 5\nI0821 23:19:44.708697 1313 log.go:172] (0xc0000f62c0) Reply frame received for 5\nI0821 23:19:44.772608 1313 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0821 23:19:44.772654 1313 log.go:172] (0xc000652000) (5) Data frame handling\nI0821 23:19:44.772691 1313 log.go:172] (0xc000652000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:19:44.970623 1313 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0821 23:19:44.970668 1313 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0821 23:19:44.970702 1313 log.go:172] (0xc00044b540) (3) Data frame handling\nI0821 23:19:44.970715 1313 log.go:172] (0xc00044b540) (3) Data frame sent\nI0821 23:19:44.970723 1313 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0821 23:19:44.970728 1313 log.go:172] (0xc00044b540) (3) Data frame handling\nI0821 23:19:44.970767 1313 log.go:172] (0xc000652000) (5) Data frame handling\nI0821 23:19:44.972279 1313 log.go:172] (0xc0000f62c0) Data frame received for 1\nI0821 23:19:44.972296 1313 log.go:172] (0xc00065e780) (1) Data frame handling\nI0821 23:19:44.972304 1313 log.go:172] (0xc00065e780) (1) Data frame sent\nI0821 23:19:44.972315 1313 log.go:172] (0xc0000f62c0) (0xc00065e780) Stream removed, broadcasting: 1\nI0821 23:19:44.972331 1313 log.go:172] (0xc0000f62c0) Go away received\nI0821 23:19:44.972922 1313 log.go:172] (0xc0000f62c0) (0xc00065e780) Stream removed, broadcasting: 1\nI0821 23:19:44.972939 1313 log.go:172] (0xc0000f62c0) (0xc00044b540) Stream removed, broadcasting: 3\nI0821 23:19:44.972947 1313 log.go:172] (0xc0000f62c0) (0xc000652000) Stream removed, broadcasting: 5\n"
Aug 21 23:19:44.980: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 23:19:44.980: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Aug 21 23:19:44.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 21 23:19:54.988: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 23:19:54.988: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 23:19:55.007: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999951s
Aug 21 23:19:56.011: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990871973s
Aug 21 23:19:57.016: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986299527s
Aug 21 23:19:58.020: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981660776s
Aug 21 23:19:59.025: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976979599s
Aug 21 23:20:00.086: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972416578s
Aug 21 23:20:01.315: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.911792255s
Aug 21 23:20:02.330: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.682697131s
Aug 21 23:20:03.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.667026251s
Aug 21 23:20:04.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 662.537527ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7192 Aug 21 23:20:05.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7192 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:20:05.546: INFO: stderr: "I0821 23:20:05.475639 1334 log.go:172] (0xc0009ae790) (0xc000601b80) Create stream\nI0821 23:20:05.475695 1334 log.go:172] (0xc0009ae790) (0xc000601b80) Stream added, broadcasting: 1\nI0821 23:20:05.478302 1334 log.go:172] (0xc0009ae790) Reply frame received for 1\nI0821 23:20:05.478344 1334 log.go:172] (0xc0009ae790) (0xc000aa2000) Create stream\nI0821 23:20:05.478357 1334 log.go:172] (0xc0009ae790) (0xc000aa2000) Stream added, broadcasting: 3\nI0821 23:20:05.479238 1334 log.go:172] (0xc0009ae790) Reply frame received for 3\nI0821 23:20:05.479275 1334 log.go:172] (0xc0009ae790) (0xc000601d60) Create stream\nI0821 23:20:05.479284 1334 log.go:172] (0xc0009ae790) (0xc000601d60) Stream added, broadcasting: 5\nI0821 23:20:05.480122 1334 log.go:172] (0xc0009ae790) Reply frame received for 5\nI0821 23:20:05.537612 1334 log.go:172] (0xc0009ae790) Data frame received for 5\nI0821 23:20:05.537640 1334 log.go:172] (0xc000601d60) (5) Data frame handling\nI0821 23:20:05.537648 1334 log.go:172] (0xc000601d60) (5) Data frame sent\nI0821 23:20:05.537654 1334 log.go:172] (0xc0009ae790) Data frame received for 5\nI0821 23:20:05.537659 1334 log.go:172] (0xc000601d60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 23:20:05.537675 1334 log.go:172] (0xc0009ae790) Data frame received for 3\nI0821 23:20:05.537680 1334 log.go:172] (0xc000aa2000) (3) Data frame handling\nI0821 23:20:05.537687 1334 log.go:172] (0xc000aa2000) (3) Data frame sent\nI0821 23:20:05.537696 1334 log.go:172] (0xc0009ae790) Data frame received for 3\nI0821 23:20:05.537701 1334 log.go:172] (0xc000aa2000) (3) Data frame handling\nI0821 23:20:05.538914 1334 log.go:172] (0xc0009ae790) Data frame received for 1\nI0821 23:20:05.538939 1334 log.go:172] (0xc000601b80) (1) Data frame handling\nI0821 23:20:05.538956 1334 log.go:172] (0xc000601b80) (1) Data frame sent\nI0821 23:20:05.538970 1334 log.go:172] (0xc0009ae790) (0xc000601b80) Stream removed, broadcasting: 1\nI0821 23:20:05.538981 1334 log.go:172] (0xc0009ae790) Go away received\nI0821 23:20:05.539298 1334 log.go:172] (0xc0009ae790) (0xc000601b80) Stream removed, broadcasting: 1\nI0821 23:20:05.539311 1334 log.go:172] (0xc0009ae790) (0xc000aa2000) Stream removed, broadcasting: 3\nI0821 23:20:05.539318 1334 log.go:172] (0xc0009ae790) (0xc000601d60) Stream removed, broadcasting: 5\n" Aug 21 23:20:05.546: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:20:05.546: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:20:05.550: INFO: Found 1 stateful pods, waiting for 3 Aug 21 23:20:15.554: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 23:20:15.554: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 23:20:15.554: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 21 23:20:25.553: INFO: Waiting for pod ss-0 to enter Running - Ready=true, 
currently Running - Ready=true Aug 21 23:20:25.553: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 23:20:25.553: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 21 23:20:25.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7192 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:20:25.747: INFO: stderr: "I0821 23:20:25.683323 1354 log.go:172] (0xc00098a0b0) (0xc0006d1b80) Create stream\nI0821 23:20:25.683381 1354 log.go:172] (0xc00098a0b0) (0xc0006d1b80) Stream added, broadcasting: 1\nI0821 23:20:25.685899 1354 log.go:172] (0xc00098a0b0) Reply frame received for 1\nI0821 23:20:25.685922 1354 log.go:172] (0xc00098a0b0) (0xc000972000) Create stream\nI0821 23:20:25.685928 1354 log.go:172] (0xc00098a0b0) (0xc000972000) Stream added, broadcasting: 3\nI0821 23:20:25.686801 1354 log.go:172] (0xc00098a0b0) Reply frame received for 3\nI0821 23:20:25.686854 1354 log.go:172] (0xc00098a0b0) (0xc0006d1d60) Create stream\nI0821 23:20:25.686895 1354 log.go:172] (0xc00098a0b0) (0xc0006d1d60) Stream added, broadcasting: 5\nI0821 23:20:25.687694 1354 log.go:172] (0xc00098a0b0) Reply frame received for 5\nI0821 23:20:25.744290 1354 log.go:172] (0xc00098a0b0) Data frame received for 5\nI0821 23:20:25.744314 1354 log.go:172] (0xc0006d1d60) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:20:25.744335 1354 log.go:172] (0xc00098a0b0) Data frame received for 3\nI0821 23:20:25.744371 1354 log.go:172] (0xc000972000) (3) Data frame handling\nI0821 23:20:25.744387 1354 log.go:172] (0xc000972000) (3) Data frame sent\nI0821 23:20:25.744400 1354 log.go:172] (0xc00098a0b0) Data frame received for 3\nI0821 23:20:25.744411 1354 log.go:172] (0xc000972000) (3) Data frame handling\nI0821 23:20:25.744460 1354 log.go:172] (0xc0006d1d60) (5) Data frame sent\nI0821 23:20:25.744485 1354 log.go:172] (0xc00098a0b0) Data frame received for 5\nI0821 23:20:25.744497 1354 log.go:172] (0xc0006d1d60) (5) Data frame handling\nI0821 23:20:25.744921 1354 log.go:172] (0xc00098a0b0) Data frame received for 1\nI0821 23:20:25.744944 1354 log.go:172] (0xc0006d1b80) (1) Data frame handling\nI0821 23:20:25.744975 1354 log.go:172] (0xc0006d1b80) (1) Data frame sent\nI0821 23:20:25.745115 1354 log.go:172] (0xc00098a0b0) (0xc0006d1b80) Stream removed, broadcasting: 1\nI0821 23:20:25.745158 1354 log.go:172] (0xc00098a0b0) Go away received\nI0821 23:20:25.745402 1354 log.go:172] (0xc00098a0b0) (0xc0006d1b80) Stream removed, broadcasting: 1\nI0821 23:20:25.745420 1354 log.go:172] (0xc00098a0b0) (0xc000972000) Stream removed, broadcasting: 3\nI0821 23:20:25.745428 1354 log.go:172] (0xc00098a0b0) (0xc0006d1d60) Stream removed, broadcasting: 5\n" Aug 21 23:20:25.747: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:20:25.747: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:20:25.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7192 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:20:26.034: INFO: stderr: "I0821 23:20:25.923755 1370 log.go:172] (0xc000105290) (0xc0005dfa40) Create 
stream\nI0821 23:20:25.923836 1370 log.go:172] (0xc000105290) (0xc0005dfa40) Stream added, broadcasting: 1\nI0821 23:20:25.925990 1370 log.go:172] (0xc000105290) Reply frame received for 1\nI0821 23:20:25.926029 1370 log.go:172] (0xc000105290) (0xc0007aa000) Create stream\nI0821 23:20:25.926048 1370 log.go:172] (0xc000105290) (0xc0007aa000) Stream added, broadcasting: 3\nI0821 23:20:25.926851 1370 log.go:172] (0xc000105290) Reply frame received for 3\nI0821 23:20:25.926894 1370 log.go:172] (0xc000105290) (0xc0005dfc20) Create stream\nI0821 23:20:25.926907 1370 log.go:172] (0xc000105290) (0xc0005dfc20) Stream added, broadcasting: 5\nI0821 23:20:25.927863 1370 log.go:172] (0xc000105290) Reply frame received for 5\nI0821 23:20:25.982756 1370 log.go:172] (0xc000105290) Data frame received for 5\nI0821 23:20:25.982773 1370 log.go:172] (0xc0005dfc20) (5) Data frame handling\nI0821 23:20:25.982783 1370 log.go:172] (0xc0005dfc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:20:26.026357 1370 log.go:172] (0xc000105290) Data frame received for 3\nI0821 23:20:26.026375 1370 log.go:172] (0xc0007aa000) (3) Data frame handling\nI0821 23:20:26.026388 1370 log.go:172] (0xc0007aa000) (3) Data frame sent\nI0821 23:20:26.026395 1370 log.go:172] (0xc000105290) Data frame received for 3\nI0821 23:20:26.026400 1370 log.go:172] (0xc0007aa000) (3) Data frame handling\nI0821 23:20:26.026493 1370 log.go:172] (0xc000105290) Data frame received for 5\nI0821 23:20:26.026500 1370 log.go:172] (0xc0005dfc20) (5) Data frame handling\nI0821 23:20:26.030203 1370 log.go:172] (0xc000105290) Data frame received for 1\nI0821 23:20:26.030221 1370 log.go:172] (0xc0005dfa40) (1) Data frame handling\nI0821 23:20:26.030237 1370 log.go:172] (0xc0005dfa40) (1) Data frame sent\nI0821 23:20:26.030316 1370 log.go:172] (0xc000105290) (0xc0005dfa40) Stream removed, broadcasting: 1\nI0821 23:20:26.030333 1370 log.go:172] (0xc000105290) Go away received\nI0821 23:20:26.030575 1370 log.go:172] (0xc000105290) (0xc0005dfa40) Stream removed, broadcasting: 1\nI0821 23:20:26.030589 1370 log.go:172] (0xc000105290) (0xc0007aa000) Stream removed, broadcasting: 3\nI0821 23:20:26.030594 1370 log.go:172] (0xc000105290) (0xc0005dfc20) Stream removed, broadcasting: 5\n" Aug 21 23:20:26.034: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:20:26.034: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:20:26.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7192 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 21 23:20:26.730: INFO: stderr: "I0821 23:20:26.203169 1391 log.go:172] (0xc000b3c420) (0xc000bac0a0) Create stream\nI0821 23:20:26.203241 1391 log.go:172] (0xc000b3c420) (0xc000bac0a0) Stream added, broadcasting: 1\nI0821 23:20:26.207341 1391 log.go:172] (0xc000b3c420) Reply frame received for 1\nI0821 23:20:26.207382 1391 log.go:172] (0xc000b3c420) (0xc0005da640) Create stream\nI0821 23:20:26.207400 1391 log.go:172] (0xc000b3c420) (0xc0005da640) Stream added, broadcasting: 3\nI0821 23:20:26.207978 1391 log.go:172] (0xc000b3c420) Reply frame received for 3\nI0821 23:20:26.207992 1391 log.go:172] (0xc000b3c420) (0xc00039f4a0) Create stream\nI0821 23:20:26.207997 1391 log.go:172] (0xc000b3c420) (0xc00039f4a0) Stream added, broadcasting: 5\nI0821 23:20:26.208564 1391 log.go:172] (0xc000b3c420) 
Reply frame received for 5\nI0821 23:20:26.256579 1391 log.go:172] (0xc000b3c420) Data frame received for 5\nI0821 23:20:26.256602 1391 log.go:172] (0xc00039f4a0) (5) Data frame handling\nI0821 23:20:26.256618 1391 log.go:172] (0xc00039f4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 23:20:26.722409 1391 log.go:172] (0xc000b3c420) Data frame received for 3\nI0821 23:20:26.722427 1391 log.go:172] (0xc0005da640) (3) Data frame handling\nI0821 23:20:26.722442 1391 log.go:172] (0xc0005da640) (3) Data frame sent\nI0821 23:20:26.722449 1391 log.go:172] (0xc000b3c420) Data frame received for 3\nI0821 23:20:26.722454 1391 log.go:172] (0xc0005da640) (3) Data frame handling\nI0821 23:20:26.722487 1391 log.go:172] (0xc000b3c420) Data frame received for 5\nI0821 23:20:26.722505 1391 log.go:172] (0xc00039f4a0) (5) Data frame handling\nI0821 23:20:26.723680 1391 log.go:172] (0xc000b3c420) Data frame received for 1\nI0821 23:20:26.723692 1391 log.go:172] (0xc000bac0a0) (1) Data frame handling\nI0821 23:20:26.723704 1391 log.go:172] (0xc000bac0a0) (1) Data frame sent\nI0821 23:20:26.723718 1391 log.go:172] (0xc000b3c420) (0xc000bac0a0) Stream removed, broadcasting: 1\nI0821 23:20:26.723729 1391 log.go:172] (0xc000b3c420) Go away received\nI0821 23:20:26.724035 1391 log.go:172] (0xc000b3c420) (0xc000bac0a0) Stream removed, broadcasting: 1\nI0821 23:20:26.724049 1391 log.go:172] (0xc000b3c420) (0xc0005da640) Stream removed, broadcasting: 3\nI0821 23:20:26.724056 1391 log.go:172] (0xc000b3c420) (0xc00039f4a0) Stream removed, broadcasting: 5\n" Aug 21 23:20:26.730: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 21 23:20:26.730: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 21 23:20:26.730: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 23:20:26.741: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 21 23:20:36.775: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 23:20:36.775: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 21 23:20:36.775: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 21 23:20:36.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999579s Aug 21 23:20:37.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996130344s Aug 21 23:20:38.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991975807s Aug 21 23:20:39.901: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.899347812s Aug 21 23:20:40.905: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.881537455s Aug 21 23:20:41.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.876763328s Aug 21 23:20:42.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.869339552s Aug 21 23:20:44.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.803329021s Aug 21 23:20:45.121: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.66538983s Aug 21 23:20:46.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 660.838364ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7192 Aug 21 23:20:47.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-7192 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:20:47.374: INFO: stderr: "I0821 23:20:47.260998 1411 log.go:172] (0xc000abc840) (0xc000a9a6e0) Create stream\nI0821 23:20:47.261067 1411 log.go:172] (0xc000abc840) (0xc000a9a6e0) Stream added, broadcasting: 1\nI0821 23:20:47.269705 1411 log.go:172] (0xc000abc840) Reply frame received for 1\nI0821 23:20:47.269841 1411 log.go:172] (0xc000abc840) (0xc000668640) Create stream\nI0821 23:20:47.269904 1411 log.go:172] (0xc000abc840) (0xc000668640) Stream added, broadcasting: 3\nI0821 23:20:47.273362 1411 log.go:172] (0xc000abc840) Reply frame received for 3\nI0821 23:20:47.273402 1411 log.go:172] (0xc000abc840) (0xc000315400) Create stream\nI0821 23:20:47.273417 1411 log.go:172] (0xc000abc840) (0xc000315400) Stream added, broadcasting: 5\nI0821 23:20:47.274833 1411 log.go:172] (0xc000abc840) Reply frame received for 5\nI0821 23:20:47.363116 1411 log.go:172] (0xc000abc840) Data frame received for 5\nI0821 23:20:47.363159 1411 log.go:172] (0xc000315400) (5) Data frame handling\nI0821 23:20:47.363175 1411 log.go:172] (0xc000315400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 23:20:47.363232 1411 log.go:172] (0xc000abc840) Data frame received for 3\nI0821 23:20:47.363255 1411 log.go:172] (0xc000668640) (3) Data frame handling\nI0821 23:20:47.363275 1411 log.go:172] (0xc000668640) (3) Data frame sent\nI0821 23:20:47.363303 1411 log.go:172] (0xc000abc840) Data frame received for 3\nI0821 23:20:47.363319 1411 log.go:172] (0xc000668640) (3) Data frame handling\nI0821 23:20:47.363439 1411 log.go:172] (0xc000abc840) Data frame received for 5\nI0821 23:20:47.363468 1411 log.go:172] (0xc000315400) (5) Data frame handling\nI0821 23:20:47.365269 1411 log.go:172] (0xc000abc840) Data frame received for 1\nI0821 23:20:47.365295 1411 log.go:172] (0xc000a9a6e0) (1) Data frame handling\nI0821 23:20:47.365313 1411 log.go:172] (0xc000a9a6e0) (1) Data frame sent\nI0821 23:20:47.365327 1411 log.go:172] (0xc000abc840) (0xc000a9a6e0) Stream removed, broadcasting: 1\nI0821 23:20:47.365345 1411 log.go:172] (0xc000abc840) Go away received\nI0821 23:20:47.365738 1411 log.go:172] (0xc000abc840) (0xc000a9a6e0) Stream removed, broadcasting: 1\nI0821 23:20:47.365763 1411 log.go:172] (0xc000abc840) (0xc000668640) Stream removed, broadcasting: 3\nI0821 23:20:47.365776 1411 log.go:172] (0xc000abc840) (0xc000315400) Stream removed, broadcasting: 5\n" Aug 21 23:20:47.374: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:20:47.374: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:20:47.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7192 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:20:47.585: INFO: stderr: "I0821 23:20:47.500155 1434 log.go:172] (0xc000b366e0) (0xc0009381e0) Create stream\nI0821 23:20:47.500218 1434 log.go:172] (0xc000b366e0) (0xc0009381e0) Stream added, broadcasting: 1\nI0821 23:20:47.502100 1434 log.go:172] (0xc000b366e0) Reply frame received for 1\nI0821 23:20:47.502154 1434 log.go:172] (0xc000b366e0) (0xc00068c640) Create stream\nI0821 23:20:47.502169 1434 log.go:172] (0xc000b366e0) (0xc00068c640) Stream added, broadcasting: 3\nI0821 23:20:47.503165 1434 log.go:172] (0xc000b366e0) Reply frame received for 3\nI0821 
23:20:47.503198 1434 log.go:172] (0xc000b366e0) (0xc000527400) Create stream\nI0821 23:20:47.503209 1434 log.go:172] (0xc000b366e0) (0xc000527400) Stream added, broadcasting: 5\nI0821 23:20:47.504052 1434 log.go:172] (0xc000b366e0) Reply frame received for 5\nI0821 23:20:47.575917 1434 log.go:172] (0xc000b366e0) Data frame received for 5\nI0821 23:20:47.575942 1434 log.go:172] (0xc000527400) (5) Data frame handling\nI0821 23:20:47.575951 1434 log.go:172] (0xc000527400) (5) Data frame sent\nI0821 23:20:47.575957 1434 log.go:172] (0xc000b366e0) Data frame received for 5\nI0821 23:20:47.575961 1434 log.go:172] (0xc000527400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 23:20:47.575979 1434 log.go:172] (0xc000b366e0) Data frame received for 3\nI0821 23:20:47.575986 1434 log.go:172] (0xc00068c640) (3) Data frame handling\nI0821 23:20:47.575993 1434 log.go:172] (0xc00068c640) (3) Data frame sent\nI0821 23:20:47.575998 1434 log.go:172] (0xc000b366e0) Data frame received for 3\nI0821 23:20:47.576003 1434 log.go:172] (0xc00068c640) (3) Data frame handling\nI0821 23:20:47.577256 1434 log.go:172] (0xc000b366e0) Data frame received for 1\nI0821 23:20:47.577280 1434 log.go:172] (0xc0009381e0) (1) Data frame handling\nI0821 23:20:47.577299 1434 log.go:172] (0xc0009381e0) (1) Data frame sent\nI0821 23:20:47.577315 1434 log.go:172] (0xc000b366e0) (0xc0009381e0) Stream removed, broadcasting: 1\nI0821 23:20:47.577435 1434 log.go:172] (0xc000b366e0) Go away received\nI0821 23:20:47.577734 1434 log.go:172] (0xc000b366e0) (0xc0009381e0) Stream removed, broadcasting: 1\nI0821 23:20:47.577748 1434 log.go:172] (0xc000b366e0) (0xc00068c640) Stream removed, broadcasting: 3\nI0821 23:20:47.577755 1434 log.go:172] (0xc000b366e0) (0xc000527400) Stream removed, broadcasting: 5\n" Aug 21 23:20:47.585: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:20:47.585: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:20:47.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7192 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 21 23:20:47.801: INFO: stderr: "I0821 23:20:47.719193 1455 log.go:172] (0xc0000f6420) (0xc0005d0000) Create stream\nI0821 23:20:47.719240 1455 log.go:172] (0xc0000f6420) (0xc0005d0000) Stream added, broadcasting: 1\nI0821 23:20:47.721554 1455 log.go:172] (0xc0000f6420) Reply frame received for 1\nI0821 23:20:47.721597 1455 log.go:172] (0xc0000f6420) (0xc0009c8000) Create stream\nI0821 23:20:47.721608 1455 log.go:172] (0xc0000f6420) (0xc0009c8000) Stream added, broadcasting: 3\nI0821 23:20:47.722293 1455 log.go:172] (0xc0000f6420) Reply frame received for 3\nI0821 23:20:47.722323 1455 log.go:172] (0xc0000f6420) (0xc000615a40) Create stream\nI0821 23:20:47.722332 1455 log.go:172] (0xc0000f6420) (0xc000615a40) Stream added, broadcasting: 5\nI0821 23:20:47.723029 1455 log.go:172] (0xc0000f6420) Reply frame received for 5\nI0821 23:20:47.792436 1455 log.go:172] (0xc0000f6420) Data frame received for 3\nI0821 23:20:47.792493 1455 log.go:172] (0xc0000f6420) Data frame received for 5\nI0821 23:20:47.792609 1455 log.go:172] (0xc000615a40) (5) Data frame handling\nI0821 23:20:47.792649 1455 log.go:172] (0xc000615a40) (5) Data frame sent\nI0821 23:20:47.792674 1455 log.go:172] (0xc0000f6420) Data frame received for 5\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0821 23:20:47.792710 1455 log.go:172] (0xc000615a40) (5) Data frame handling\nI0821 23:20:47.792872 1455 log.go:172] (0xc0009c8000) (3) Data frame handling\nI0821 23:20:47.792893 1455 log.go:172] (0xc0009c8000) (3) Data frame sent\nI0821 23:20:47.792905 1455 log.go:172] (0xc0000f6420) Data frame received for 3\nI0821 23:20:47.792927 1455 log.go:172] (0xc0009c8000) (3) Data frame handling\nI0821 23:20:47.794088 1455 log.go:172] (0xc0000f6420) Data frame received for 1\nI0821 23:20:47.794109 1455 log.go:172] (0xc0005d0000) (1) Data frame handling\nI0821 23:20:47.794121 1455 log.go:172] (0xc0005d0000) (1) Data frame sent\nI0821 23:20:47.794132 1455 log.go:172] (0xc0000f6420) (0xc0005d0000) Stream removed, broadcasting: 1\nI0821 23:20:47.794148 1455 log.go:172] (0xc0000f6420) Go away received\nI0821 23:20:47.794497 1455 log.go:172] (0xc0000f6420) (0xc0005d0000) Stream removed, broadcasting: 1\nI0821 23:20:47.794523 1455 log.go:172] (0xc0000f6420) (0xc0009c8000) Stream removed, broadcasting: 3\nI0821 23:20:47.794536 1455 log.go:172] (0xc0000f6420) (0xc000615a40) Stream removed, broadcasting: 5\n" Aug 21 23:20:47.801: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 21 23:20:47.801: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 21 23:20:47.801: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 21 23:21:17.871: INFO: Deleting all statefulset in ns statefulset-7192 Aug 21 23:21:17.874: INFO: Scaling statefulset ss to 0 Aug 21 23:21:17.882: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 23:21:17.883: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:21:17.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7192" for this suite. 
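The spec above drives ordered scaling by toggling readiness: moving index.html into the httpd docroot makes each pod's readiness probe pass, which lets the next ordinal proceed, and scale-down then removes ss-2, ss-1, ss-0 in reverse. A minimal sketch of a comparable StatefulSet (hypothetical service name and image; the e2e framework generates its own manifest):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss                          # the test's object is "ss" in namespace statefulset-7192
    spec:
      serviceName: test                 # hypothetical headless service name
      replicas: 3
      podManagementPolicy: OrderedReady # default policy; gives the ordered ss-0, ss-1, ss-2 behavior
      selector:
        matchLabels:
          app: ss
      template:
        metadata:
          labels:
            app: ss
        spec:
          containers:
          - name: webserver
            image: httpd:2.4            # illustrative; the probe fails until index.html is moved in
            readinessProbe:
              httpGet:
                path: /index.html
                port: 80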
• [SLOW TEST:113.460 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":27,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:21:17.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-5afb1f4b-7185-4936-932a-610c1c1a6c33 STEP: Creating a pod to test consume configMaps Aug 21 23:21:18.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68" in namespace "configmap-5006" to be "success or failure" Aug 21 23:21:18.015: INFO: Pod "pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.462804ms Aug 21 23:21:20.134: INFO: Pod "pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123195814s Aug 21 23:21:22.138: INFO: Pod "pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.126978534s STEP: Saw pod success Aug 21 23:21:22.138: INFO: Pod "pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68" satisfied condition "success or failure" Aug 21 23:21:22.140: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68 container configmap-volume-test: STEP: delete the pod Aug 21 23:21:22.232: INFO: Waiting for pod pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68 to disappear Aug 21 23:21:22.242: INFO: Pod pod-configmaps-189df357-ac05-4f22-95c0-b394d2e15f68 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:21:22.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5006" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":350,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:21:22.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 23:21:23.249: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 23:21:25.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 23:21:27.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733648883, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 23:21:30.565: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:21:30.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3620" for this suite. STEP: Destroying namespace "webhook-3620-markers" for this suite. 
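The webhook deployed above is registered with failurePolicy: Fail and pointed at a backend the API server cannot reach, so every matching request is rejected rather than allowed through. A rough sketch of such a fail-closed registration (hypothetical names; the test wires up its own service, certificates, and CA bundle):

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: fail-closed-example        # hypothetical
    webhooks:
    - name: fail-closed.example.com
      failurePolicy: Fail              # reject the request whenever the webhook cannot be reached
      clientConfig:
        service:
          name: no-such-service        # deliberately unreachable backend
          namespace: default
          path: /validate
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
      sideEffects: None
      admissionReviewVersions: ["v1"]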
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.561 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":29,"skipped":355,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:21:30.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:21:35.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7933" for this suite. 
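The wrapper-volume spec above checks that a secret volume and a configMap volume, each materialized by the kubelet on top of an emptyDir "wrapper", can coexist in one pod without their mounts conflicting. A rough equivalent (hypothetical names and image):

    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapper-test          # hypothetical
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret  # both mounts must appear without clobbering each other
        - name: cm-vol
          mountPath: /etc/config
      volumes:
      - name: secret-vol
        secret:
          secretName: my-secret   # hypothetical
      - name: cm-vol
        configMap:
          name: my-config         # hypothetical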
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":30,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:21:35.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-bcb08527-9917-4f01-bf11-d3a60acf63ee STEP: Creating configMap with name cm-test-opt-upd-a3bc64e0-00e3-457e-b431-c303435afff4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-bcb08527-9917-4f01-bf11-d3a60acf63ee STEP: Updating configmap cm-test-opt-upd-a3bc64e0-00e3-457e-b431-c303435afff4 STEP: Creating configMap with name cm-test-opt-create-7ae395cf-d1fd-4d6e-8c89-6dc9c2f5a277 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:21:45.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6422" for this suite. 
• [SLOW TEST:10.243 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":417,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:21:45.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:21:45.740: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 21 23:21:47.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9013 create -f -' Aug 21 23:21:53.122: INFO: stderr: "" Aug 21 23:21:53.122: INFO: stdout: "e2e-test-crd-publish-openapi-7618-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 21 23:21:53.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9013 delete e2e-test-crd-publish-openapi-7618-crds test-foo' Aug 21 23:21:53.263: INFO: stderr: "" Aug 21 23:21:53.263: INFO: stdout: "e2e-test-crd-publish-openapi-7618-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 21 23:21:53.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9013 apply -f -' Aug 21 23:21:53.985: INFO: stderr: "" Aug 21 23:21:53.985: INFO: stdout: "e2e-test-crd-publish-openapi-7618-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 21 23:21:53.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9013 delete e2e-test-crd-publish-openapi-7618-crds test-foo' Aug 21 23:21:54.578: INFO: stderr: "" Aug 21 23:21:54.578: INFO: stdout: "e2e-test-crd-publish-openapi-7618-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 21 23:21:54.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9013 create -f -' Aug 21 23:21:54.968: INFO: rc: 1 Aug 21 23:21:54.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-9013 apply -f -' Aug 21 23:21:55.563: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 21 23:21:55.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9013 create -f -' Aug 21 23:21:55.800: INFO: rc: 1 Aug 21 23:21:55.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9013 apply -f -' Aug 21 23:21:56.062: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 21 23:21:56.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7618-crds' Aug 21 23:21:56.294: INFO: stderr: "" Aug 21 23:21:56.294: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7618-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 21 23:21:56.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7618-crds.metadata' Aug 21 23:21:56.548: INFO: stderr: "" Aug 21 23:21:56.548: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7618-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 21 23:21:56.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7618-crds.spec' Aug 21 23:21:56.815: INFO: stderr: "" Aug 21 23:21:56.815: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7618-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 21 23:21:56.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7618-crds.spec.bars' Aug 21 23:21:57.088: INFO: stderr: "" Aug 21 23:21:57.088: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7618-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 21 23:21:57.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7618-crds.spec.bars2' Aug 21 23:21:57.368: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:21:59.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9013" for this suite. • [SLOW TEST:13.668 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":32,"skipped":423,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:21:59.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-277ceaa8-559a-4449-ba21-a2d8967403f7 STEP: Creating a pod to test consume configMaps Aug 21 
23:21:59.325: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31" in namespace "projected-8411" to be "success or failure" Aug 21 23:21:59.328: INFO: Pod "pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.623876ms Aug 21 23:22:01.331: INFO: Pod "pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006162117s Aug 21 23:22:03.348: INFO: Pod "pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023009664s Aug 21 23:22:05.500: INFO: Pod "pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.174850843s STEP: Saw pod success Aug 21 23:22:05.500: INFO: Pod "pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31" satisfied condition "success or failure" Aug 21 23:22:05.503: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31 container projected-configmap-volume-test: STEP: delete the pod Aug 21 23:22:05.922: INFO: Waiting for pod pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31 to disappear Aug 21 23:22:05.975: INFO: Pod pod-projected-configmaps-6ff10243-fe18-47c2-928c-9b99be633d31 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:22:05.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8411" for this suite. • [SLOW TEST:6.734 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":433,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:22:05.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 
a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 21 23:22:07.100: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:07.102: INFO: Number of nodes with available pods: 0 Aug 21 23:22:07.102: INFO: Node jerma-worker is running more than one daemon pod Aug 21 23:22:08.106: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:08.108: INFO: Number of nodes with available pods: 0 Aug 21 23:22:08.108: INFO: Node jerma-worker is running more than one daemon pod Aug 21 23:22:09.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:09.166: INFO: Number of nodes with available pods: 0 Aug 21 23:22:09.166: INFO: Node jerma-worker is running more than one daemon pod Aug 21 23:22:10.107: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:10.110: INFO: Number of nodes with available pods: 0 Aug 21 23:22:10.110: INFO: Node jerma-worker is running more than one daemon pod Aug 21 23:22:11.244: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:11.663: INFO: Number of nodes with available pods: 0 Aug 21 23:22:11.663: INFO: Node jerma-worker is running more than one daemon pod Aug 21 23:22:12.489: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:12.496: INFO: Number of nodes with available pods: 0 Aug 21 23:22:12.496: INFO: Node jerma-worker is running more than one daemon pod Aug 21 23:22:13.107: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:13.110: INFO: Number of nodes with available pods: 1 Aug 21 23:22:13.110: INFO: Node jerma-worker is running more than one daemon pod Aug 21 23:22:14.107: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:14.110: INFO: Number of nodes with available pods: 2 Aug 21 23:22:14.110: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 21 23:22:14.131: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 21 23:22:14.149: INFO: Number of nodes with available pods: 2 Aug 21 23:22:14.149: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
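The revival step works because the DaemonSet controller treats a Failed daemon pod as leaving its node uncovered: it deletes the failed pod and schedules a replacement. The DaemonSet under test can be as small as the following sketch (hypothetical image; note the lack of a master-taint toleration, matching the "can't tolerate node jerma-control-plane" lines above):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: k8s.gcr.io/pause:3.1   # illustrative image choice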
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7701, will wait for the garbage collector to delete the pods Aug 21 23:22:15.436: INFO: Deleting DaemonSet.extensions daemon-set took: 5.095194ms Aug 21 23:22:17.637: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.20028221s Aug 21 23:22:31.640: INFO: Number of nodes with available pods: 0 Aug 21 23:22:31.640: INFO: Number of running nodes: 0, number of available pods: 0 Aug 21 23:22:31.643: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7701/daemonsets","resourceVersion":"2278861"},"items":null} Aug 21 23:22:31.647: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7701/pods","resourceVersion":"2278861"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:22:31.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7701" for this suite. • [SLOW TEST:25.697 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":34,"skipped":438,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:22:31.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-ftt7 STEP: Creating a pod to test atomic-volume-subpath Aug 21 23:22:31.761: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ftt7" in namespace "subpath-3981" to be "success or failure" Aug 21 23:22:31.777: INFO: Pod 
"pod-subpath-test-downwardapi-ftt7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.881935ms Aug 21 23:22:33.780: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0194906s Aug 21 23:22:35.784: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 4.023155728s Aug 21 23:22:37.788: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 6.027235481s Aug 21 23:22:39.792: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 8.031330281s Aug 21 23:22:41.796: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 10.035574545s Aug 21 23:22:43.800: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 12.039451349s Aug 21 23:22:45.805: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 14.043914191s Aug 21 23:22:47.809: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 16.048166508s Aug 21 23:22:49.813: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 18.052207353s Aug 21 23:22:51.817: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 20.056264095s Aug 21 23:22:53.821: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Running", Reason="", readiness=true. Elapsed: 22.060252091s Aug 21 23:22:55.824: INFO: Pod "pod-subpath-test-downwardapi-ftt7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063776588s STEP: Saw pod success Aug 21 23:22:55.824: INFO: Pod "pod-subpath-test-downwardapi-ftt7" satisfied condition "success or failure" Aug 21 23:22:55.828: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-ftt7 container test-container-subpath-downwardapi-ftt7: STEP: delete the pod Aug 21 23:22:55.847: INFO: Waiting for pod pod-subpath-test-downwardapi-ftt7 to disappear Aug 21 23:22:55.850: INFO: Pod pod-subpath-test-downwardapi-ftt7 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ftt7 Aug 21 23:22:55.850: INFO: Deleting pod "pod-subpath-test-downwardapi-ftt7" in namespace "subpath-3981" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:22:55.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3981" for this suite. 
• [SLOW TEST:24.201 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":35,"skipped":548,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:22:55.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 21 23:23:01.049: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:23:01.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5182" for this suite. 
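Adoption and release above are driven purely by label selection: a pre-existing pod matching the ReplicaSet's selector gets an ownerReference set (adoption), and changing the pod's label so it stops matching clears that ownerReference (release) and makes the ReplicaSet create a replacement. Sketch (hypothetical image and command):

    # Pre-existing orphan pod that the ReplicaSet will adopt
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-adoption-release
      labels:
        name: pod-adoption-release     # matches the selector below
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
    ---
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: pod-adoption-release
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: pod-adoption-release
      template:
        metadata:
          labels:
            name: pod-adoption-release
        spec:
          containers:
          - name: c
            image: busybox
            command: ["sleep", "3600"]
    # Editing the pod's "name" label afterwards releases it from the ReplicaSet.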
• [SLOW TEST:5.412 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":36,"skipped":562,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:23:01.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-td8j STEP: Creating a pod to test atomic-volume-subpath Aug 21 23:23:01.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-td8j" in namespace "subpath-8360" to be "success or failure" Aug 21 23:23:01.504: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Pending", Reason="", readiness=false. Elapsed: 9.48739ms Aug 21 23:23:03.507: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012889847s Aug 21 23:23:05.509: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015427417s Aug 21 23:23:07.527: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 6.032725545s Aug 21 23:23:09.530: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 8.035607695s Aug 21 23:23:11.534: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 10.039909307s Aug 21 23:23:13.538: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 12.044370396s Aug 21 23:23:15.542: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 14.048300984s Aug 21 23:23:17.545: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 16.05121717s Aug 21 23:23:19.549: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 18.054996124s Aug 21 23:23:21.553: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.058723522s Aug 21 23:23:23.557: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 22.062865117s Aug 21 23:23:25.609: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Running", Reason="", readiness=true. Elapsed: 24.114546224s Aug 21 23:23:27.815: INFO: Pod "pod-subpath-test-secret-td8j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.32047031s STEP: Saw pod success Aug 21 23:23:27.815: INFO: Pod "pod-subpath-test-secret-td8j" satisfied condition "success or failure" Aug 21 23:23:27.818: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-td8j container test-container-subpath-secret-td8j: STEP: delete the pod Aug 21 23:23:28.209: INFO: Waiting for pod pod-subpath-test-secret-td8j to disappear Aug 21 23:23:28.240: INFO: Pod pod-subpath-test-secret-td8j no longer exists STEP: Deleting pod pod-subpath-test-secret-td8j Aug 21 23:23:28.240: INFO: Deleting pod "pod-subpath-test-secret-td8j" in namespace "subpath-8360" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:23:28.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8360" for this suite. • [SLOW TEST:26.950 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":37,"skipped":577,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:23:28.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-c6d3e435-6047-4d35-a21e-7b5d1d003c8b STEP: Creating a pod to test consume secrets Aug 21 23:23:28.471: INFO: Waiting up to 5m0s for pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c" in namespace "secrets-1046" to be "success or failure" Aug 21 23:23:28.486: INFO: Pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c": Phase="Pending", Reason="", 
readiness=false. Elapsed: 14.986775ms Aug 21 23:23:30.491: INFO: Pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019845067s Aug 21 23:23:32.647: INFO: Pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175642689s Aug 21 23:23:34.651: INFO: Pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c": Phase="Running", Reason="", readiness=true. Elapsed: 6.179932474s Aug 21 23:23:36.805: INFO: Pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c": Phase="Running", Reason="", readiness=true. Elapsed: 8.333507576s Aug 21 23:23:38.926: INFO: Pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.454616168s STEP: Saw pod success Aug 21 23:23:38.926: INFO: Pod "pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c" satisfied condition "success or failure" Aug 21 23:23:38.928: INFO: Trying to get logs from node jerma-worker pod pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c container secret-volume-test: STEP: delete the pod Aug 21 23:23:38.992: INFO: Waiting for pod pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c to disappear Aug 21 23:23:39.093: INFO: Pod pod-secrets-605ba2a7-8c6b-4f7c-99f2-349957e9ae4c no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:23:39.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1046" for this suite. • [SLOW TEST:10.854 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":583,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:23:39.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 21 23:23:46.493: INFO: 10 pods remaining Aug 21 23:23:46.493: INFO: 10 pods has nil DeletionTimestamp Aug 21 23:23:46.493: INFO: 
Aug 21 23:23:48.490: INFO: 0 pods remaining
Aug 21 23:23:48.490: INFO: 0 pods has nil DeletionTimestamp
Aug 21 23:23:48.490: INFO:
Aug 21 23:23:49.971: INFO: 0 pods remaining
Aug 21 23:23:49.971: INFO: 0 pods has nil DeletionTimestamp
Aug 21 23:23:49.971: INFO:
Aug 21 23:23:51.223: INFO: 0 pods remaining
Aug 21 23:23:51.223: INFO: 0 pods has nil DeletionTimestamp
Aug 21 23:23:51.223: INFO:
STEP: Gathering metrics
W0821 23:23:53.651527 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 23:23:53.651: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:23:53.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-299" for this suite.
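
The garbage-collector spec above exercises foreground cascading deletion: a DELETE with deleteOptions.propagationPolicy=Foreground keeps the replication controller around (with a deletion timestamp set) until the garbage collector has removed all of its pods, which is exactly what the "pods remaining" countdown shows. A minimal way to issue such a delete by hand, sketched against an illustrative RC named my-rc in the default namespace (not objects from this run):

# Foreground deletion: the owner stays until its dependents are gone.
kubectl proxy --port=8001 &
curl -X DELETE 'http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'

Newer kubectl releases expose the same policy directly as: kubectl delete rc my-rc --cascade=foreground
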
• [SLOW TEST:15.224 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":39,"skipped":589,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:23:54.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Aug 21 23:23:56.481: INFO: Waiting up to 5m0s for pod "var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3" in namespace "var-expansion-9083" to be "success or failure" Aug 21 23:23:56.858: INFO: Pod "var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3": Phase="Pending", Reason="", readiness=false. Elapsed: 376.677942ms Aug 21 23:23:58.878: INFO: Pod "var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396810008s Aug 21 23:24:00.884: INFO: Pod "var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402863049s Aug 21 23:24:03.070: INFO: Pod "var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.589315484s STEP: Saw pod success Aug 21 23:24:03.071: INFO: Pod "var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3" satisfied condition "success or failure" Aug 21 23:24:03.074: INFO: Trying to get logs from node jerma-worker pod var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3 container dapi-container: STEP: delete the pod Aug 21 23:24:03.394: INFO: Waiting for pod var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3 to disappear Aug 21 23:24:03.493: INFO: Pod var-expansion-8cc213bf-ceab-4b4d-b2a2-b8764cbb36f3 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:24:03.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9083" for this suite. 
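
The var-expansion spec that just finished creates a pod whose container command references an environment variable with the $(VAR) syntax; the kubelet expands it before starting the container, so no shell is required for the substitution. A minimal sketch of such a pod (the name, image, and values here are illustrative, not the test's actual manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # $(MESSAGE) is expanded by Kubernetes from the env entries below
    command: ["/bin/sh", "-c", "echo test value: $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello world"
EOF
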
• [SLOW TEST:9.265 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":590,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:24:03.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2935.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2935.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2935.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2935.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2935.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2935.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 23:24:18.138: INFO: DNS probes using dns-2935/dns-test-4e65b96e-ba5f-4470-af12-dbe89a33972b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:24:18.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2935" for this suite. • [SLOW TEST:15.112 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":41,"skipped":602,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:24:18.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 21 23:24:19.635: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:24:31.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1338" for this suite. 
• [SLOW TEST:13.212 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":42,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:24:31.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 21 23:24:32.108: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 21 23:24:43.424: INFO: >>> kubeConfig: /root/.kube/config Aug 21 23:24:46.325: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:24:56.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-169" for this suite. 
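
The CustomResourcePublishOpenAPI spec checks that every served version of a CRD's group shows up in the cluster's aggregated OpenAPI document. The test generates its own random CRDs; a hand-written multi-version CRD along the same lines (group, kind, and schema are illustrative) would be:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com     # illustrative group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true               # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
EOF
# Both served versions should then be visible in the published schema:
kubectl get --raw /openapi/v2 | grep -c Widget
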
• [SLOW TEST:24.872 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":43,"skipped":653,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:24:56.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Aug 21 23:24:57.007: INFO: Waiting up to 5m0s for pod "client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb" in namespace "containers-3550" to be "success or failure" Aug 21 23:24:57.017: INFO: Pod "client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.146574ms Aug 21 23:24:59.022: INFO: Pod "client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01488221s Aug 21 23:25:01.025: INFO: Pod "client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb": Phase="Running", Reason="", readiness=true. Elapsed: 4.018171198s Aug 21 23:25:03.029: INFO: Pod "client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022108343s STEP: Saw pod success Aug 21 23:25:03.029: INFO: Pod "client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb" satisfied condition "success or failure" Aug 21 23:25:03.032: INFO: Trying to get logs from node jerma-worker pod client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb container test-container: STEP: delete the pod Aug 21 23:25:03.090: INFO: Waiting for pod client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb to disappear Aug 21 23:25:03.098: INFO: Pod client-containers-c36d348f-61fa-487b-a2b9-b41ef027a9cb no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:25:03.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3550" for this suite. 
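
The Docker Containers spec above confirms the interaction between a pod spec and the image metadata: command overrides the image's ENTRYPOINT and args overrides its CMD, and the "override all" case sets both. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]                    # replaces the image ENTRYPOINT
    args: ["overridden", "arguments"]    # replaces the image CMD
EOF
# Once the container has completed:
kubectl logs override-demo      # expect: overridden arguments
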
• [SLOW TEST:6.312 seconds] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:25:03.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 21 23:25:03.843: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 21 23:25:05.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649103, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649103, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649104, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649103, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 23:25:08.884: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:25:08.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:25:10.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4565" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.195 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":45,"skipped":748,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:25:10.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[] Aug 21 23:25:10.413: INFO: Get endpoints failed (12.328366ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Aug 21 23:25:11.443: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[] (1.042696331s elapsed) STEP: Creating pod pod1 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[pod1:[80]] Aug 21 23:25:15.582: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[pod1:[80]] (4.133081312s elapsed) STEP: Creating 
pod pod2 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[pod1:[80] pod2:[80]] Aug 21 23:25:19.036: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[pod1:[80] pod2:[80]] (3.450018874s elapsed) STEP: Deleting pod pod1 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[pod2:[80]] Aug 21 23:25:20.097: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[pod2:[80]] (1.057176619s elapsed) STEP: Deleting pod pod2 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[] Aug 21 23:25:20.138: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[] (36.008186ms elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:25:20.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8787" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.937 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":46,"skipped":761,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:25:20.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:25:24.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4264" for this suite. 
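
The Kubelet spec above runs a one-shot busybox command and asserts that its stdout is captured in the container log. The equivalent by hand (pod name illustrative):

kubectl run logs-demo --image=busybox --restart=Never -- /bin/sh -c 'echo hello from busybox'
# Once the container has run:
kubectl logs logs-demo          # expect: hello from busybox
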
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:25:24.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7503 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7503 STEP: creating replication controller externalsvc in namespace services-7503 I0821 23:25:24.670564 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7503, replica count: 2 I0821 23:25:27.721066 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 23:25:30.721340 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 21 23:25:30.773: INFO: Creating new exec pod Aug 21 23:25:34.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7503 execpodhzhnn -- /bin/sh -x -c nslookup nodeport-service' Aug 21 23:25:35.021: INFO: stderr: "I0821 23:25:34.925829 1758 log.go:172] (0xc000929600) (0xc000914820) Create stream\nI0821 23:25:34.925905 1758 log.go:172] (0xc000929600) (0xc000914820) Stream added, broadcasting: 1\nI0821 23:25:34.936178 1758 log.go:172] (0xc000929600) Reply frame received for 1\nI0821 23:25:34.936221 1758 log.go:172] (0xc000929600) (0xc000527360) Create stream\nI0821 23:25:34.936231 1758 log.go:172] (0xc000929600) (0xc000527360) Stream added, broadcasting: 3\nI0821 23:25:34.937289 1758 log.go:172] (0xc000929600) Reply frame received for 3\nI0821 23:25:34.937315 1758 log.go:172] (0xc000929600) (0xc000914000) Create stream\nI0821 23:25:34.937324 1758 log.go:172] (0xc000929600) (0xc000914000) Stream added, broadcasting: 5\nI0821 23:25:34.938683 1758 log.go:172] (0xc000929600) Reply frame received for 5\nI0821 23:25:35.006960 1758 log.go:172] (0xc000929600) Data frame received for 5\nI0821 23:25:35.006998 1758 log.go:172] (0xc000914000) (5) Data frame handling\nI0821 23:25:35.007021 1758 log.go:172] (0xc000914000) (5) Data frame sent\n+ nslookup nodeport-service\nI0821 
23:25:35.013767 1758 log.go:172] (0xc000929600) Data frame received for 3\nI0821 23:25:35.013794 1758 log.go:172] (0xc000527360) (3) Data frame handling\nI0821 23:25:35.013810 1758 log.go:172] (0xc000527360) (3) Data frame sent\nI0821 23:25:35.014679 1758 log.go:172] (0xc000929600) Data frame received for 3\nI0821 23:25:35.014705 1758 log.go:172] (0xc000527360) (3) Data frame handling\nI0821 23:25:35.014733 1758 log.go:172] (0xc000527360) (3) Data frame sent\nI0821 23:25:35.015119 1758 log.go:172] (0xc000929600) Data frame received for 3\nI0821 23:25:35.015148 1758 log.go:172] (0xc000527360) (3) Data frame handling\nI0821 23:25:35.015163 1758 log.go:172] (0xc000929600) Data frame received for 5\nI0821 23:25:35.015168 1758 log.go:172] (0xc000914000) (5) Data frame handling\nI0821 23:25:35.017212 1758 log.go:172] (0xc000929600) Data frame received for 1\nI0821 23:25:35.017239 1758 log.go:172] (0xc000914820) (1) Data frame handling\nI0821 23:25:35.017255 1758 log.go:172] (0xc000914820) (1) Data frame sent\nI0821 23:25:35.017272 1758 log.go:172] (0xc000929600) (0xc000914820) Stream removed, broadcasting: 1\nI0821 23:25:35.017300 1758 log.go:172] (0xc000929600) Go away received\nI0821 23:25:35.017889 1758 log.go:172] (0xc000929600) (0xc000914820) Stream removed, broadcasting: 1\nI0821 23:25:35.017914 1758 log.go:172] (0xc000929600) (0xc000527360) Stream removed, broadcasting: 3\nI0821 23:25:35.017927 1758 log.go:172] (0xc000929600) (0xc000914000) Stream removed, broadcasting: 5\n" Aug 21 23:25:35.022: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7503.svc.cluster.local\tcanonical name = externalsvc.services-7503.svc.cluster.local.\nName:\texternalsvc.services-7503.svc.cluster.local\nAddress: 10.101.79.82\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7503, will wait for the garbage collector to delete the pods Aug 21 23:25:35.082: INFO: Deleting ReplicationController externalsvc took: 7.17143ms Aug 21 23:25:35.182: INFO: Terminating ReplicationController externalsvc pods took: 100.247041ms Aug 21 23:25:51.855: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:25:51.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7503" for this suite. 
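
What the NodePort-to-ExternalName spec does, in effect, is mutate the existing service in place so that cluster DNS starts answering with a CNAME, which is what the nslookup output above ("canonical name = externalsvc.services-7503.svc.cluster.local.") demonstrates. A rough hand-rolled equivalent is a patch like the one below; treat it as a sketch, since the fields that must be cleared during the type change (clusterIP, node ports) can vary with cluster version and validation rules:

kubectl -n services-7503 patch service nodeport-service --type merge -p '
spec:
  type: ExternalName
  externalName: externalsvc.services-7503.svc.cluster.local
  clusterIP: ""
  ports: []
'
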
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.468 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":48,"skipped":818,"failed":0} SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:25:51.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 21 23:26:02.074: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.074: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.102557 6 log.go:172] (0xc002fb14a0) (0xc0029d0460) Create stream I0821 23:26:02.102588 6 log.go:172] (0xc002fb14a0) (0xc0029d0460) Stream added, broadcasting: 1 I0821 23:26:02.105240 6 log.go:172] (0xc002fb14a0) Reply frame received for 1 I0821 23:26:02.105272 6 log.go:172] (0xc002fb14a0) (0xc002fe1400) Create stream I0821 23:26:02.105281 6 log.go:172] (0xc002fb14a0) (0xc002fe1400) Stream added, broadcasting: 3 I0821 23:26:02.106161 6 log.go:172] (0xc002fb14a0) Reply frame received for 3 I0821 23:26:02.106202 6 log.go:172] (0xc002fb14a0) (0xc002954500) Create stream I0821 23:26:02.106218 6 log.go:172] (0xc002fb14a0) (0xc002954500) Stream added, broadcasting: 5 I0821 23:26:02.107090 6 log.go:172] (0xc002fb14a0) Reply frame received for 5 I0821 23:26:02.181465 6 log.go:172] (0xc002fb14a0) Data frame received for 5 I0821 23:26:02.181530 6 log.go:172] (0xc002954500) (5) Data frame handling I0821 23:26:02.181566 6 log.go:172] (0xc002fb14a0) Data frame received for 3 I0821 23:26:02.181587 6 log.go:172] (0xc002fe1400) (3) Data frame handling I0821 23:26:02.181628 6 log.go:172] (0xc002fe1400) (3) Data frame sent I0821 23:26:02.181661 6 log.go:172] (0xc002fb14a0) Data frame received for 3 I0821 23:26:02.181676 6 log.go:172] (0xc002fe1400) (3) 
Data frame handling I0821 23:26:02.183700 6 log.go:172] (0xc002fb14a0) Data frame received for 1 I0821 23:26:02.183740 6 log.go:172] (0xc0029d0460) (1) Data frame handling I0821 23:26:02.183789 6 log.go:172] (0xc0029d0460) (1) Data frame sent I0821 23:26:02.183808 6 log.go:172] (0xc002fb14a0) (0xc0029d0460) Stream removed, broadcasting: 1 I0821 23:26:02.183824 6 log.go:172] (0xc002fb14a0) Go away received I0821 23:26:02.184186 6 log.go:172] (0xc002fb14a0) (0xc0029d0460) Stream removed, broadcasting: 1 I0821 23:26:02.184225 6 log.go:172] (0xc002fb14a0) (0xc002fe1400) Stream removed, broadcasting: 3 I0821 23:26:02.184240 6 log.go:172] (0xc002fb14a0) (0xc002954500) Stream removed, broadcasting: 5 Aug 21 23:26:02.184: INFO: Exec stderr: "" Aug 21 23:26:02.184: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.184: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.214532 6 log.go:172] (0xc003b2a580) (0xc002fe1680) Create stream I0821 23:26:02.214553 6 log.go:172] (0xc003b2a580) (0xc002fe1680) Stream added, broadcasting: 1 I0821 23:26:02.216898 6 log.go:172] (0xc003b2a580) Reply frame received for 1 I0821 23:26:02.216968 6 log.go:172] (0xc003b2a580) (0xc0029d0500) Create stream I0821 23:26:02.216997 6 log.go:172] (0xc003b2a580) (0xc0029d0500) Stream added, broadcasting: 3 I0821 23:26:02.217904 6 log.go:172] (0xc003b2a580) Reply frame received for 3 I0821 23:26:02.217958 6 log.go:172] (0xc003b2a580) (0xc002fe1720) Create stream I0821 23:26:02.217976 6 log.go:172] (0xc003b2a580) (0xc002fe1720) Stream added, broadcasting: 5 I0821 23:26:02.219232 6 log.go:172] (0xc003b2a580) Reply frame received for 5 I0821 23:26:02.273861 6 log.go:172] (0xc003b2a580) Data frame received for 5 I0821 23:26:02.273907 6 log.go:172] (0xc002fe1720) (5) Data frame handling I0821 23:26:02.273933 6 log.go:172] (0xc003b2a580) Data frame received for 3 I0821 23:26:02.273957 6 log.go:172] (0xc0029d0500) (3) Data frame handling I0821 23:26:02.273972 6 log.go:172] (0xc0029d0500) (3) Data frame sent I0821 23:26:02.273986 6 log.go:172] (0xc003b2a580) Data frame received for 3 I0821 23:26:02.273997 6 log.go:172] (0xc0029d0500) (3) Data frame handling I0821 23:26:02.276243 6 log.go:172] (0xc003b2a580) Data frame received for 1 I0821 23:26:02.276267 6 log.go:172] (0xc002fe1680) (1) Data frame handling I0821 23:26:02.276308 6 log.go:172] (0xc002fe1680) (1) Data frame sent I0821 23:26:02.276464 6 log.go:172] (0xc003b2a580) (0xc002fe1680) Stream removed, broadcasting: 1 I0821 23:26:02.276509 6 log.go:172] (0xc003b2a580) Go away received I0821 23:26:02.277003 6 log.go:172] (0xc003b2a580) (0xc002fe1680) Stream removed, broadcasting: 1 I0821 23:26:02.277042 6 log.go:172] (0xc003b2a580) (0xc0029d0500) Stream removed, broadcasting: 3 I0821 23:26:02.277060 6 log.go:172] (0xc003b2a580) (0xc002fe1720) Stream removed, broadcasting: 5 Aug 21 23:26:02.277: INFO: Exec stderr: "" Aug 21 23:26:02.277: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.277: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.299885 6 log.go:172] (0xc002fe9c30) (0xc002954780) Create stream I0821 23:26:02.299909 6 log.go:172] (0xc002fe9c30) (0xc002954780) Stream added, broadcasting: 1 I0821 23:26:02.301718 6 log.go:172] 
(0xc002fe9c30) Reply frame received for 1 I0821 23:26:02.301755 6 log.go:172] (0xc002fe9c30) (0xc002db7ea0) Create stream I0821 23:26:02.301771 6 log.go:172] (0xc002fe9c30) (0xc002db7ea0) Stream added, broadcasting: 3 I0821 23:26:02.302399 6 log.go:172] (0xc002fe9c30) Reply frame received for 3 I0821 23:26:02.302434 6 log.go:172] (0xc002fe9c30) (0xc0029d05a0) Create stream I0821 23:26:02.302444 6 log.go:172] (0xc002fe9c30) (0xc0029d05a0) Stream added, broadcasting: 5 I0821 23:26:02.303134 6 log.go:172] (0xc002fe9c30) Reply frame received for 5 I0821 23:26:02.357305 6 log.go:172] (0xc002fe9c30) Data frame received for 5 I0821 23:26:02.357360 6 log.go:172] (0xc0029d05a0) (5) Data frame handling I0821 23:26:02.357401 6 log.go:172] (0xc002fe9c30) Data frame received for 3 I0821 23:26:02.357425 6 log.go:172] (0xc002db7ea0) (3) Data frame handling I0821 23:26:02.357454 6 log.go:172] (0xc002db7ea0) (3) Data frame sent I0821 23:26:02.357468 6 log.go:172] (0xc002fe9c30) Data frame received for 3 I0821 23:26:02.357483 6 log.go:172] (0xc002db7ea0) (3) Data frame handling I0821 23:26:02.358938 6 log.go:172] (0xc002fe9c30) Data frame received for 1 I0821 23:26:02.358968 6 log.go:172] (0xc002954780) (1) Data frame handling I0821 23:26:02.358995 6 log.go:172] (0xc002954780) (1) Data frame sent I0821 23:26:02.359012 6 log.go:172] (0xc002fe9c30) (0xc002954780) Stream removed, broadcasting: 1 I0821 23:26:02.359058 6 log.go:172] (0xc002fe9c30) Go away received I0821 23:26:02.359144 6 log.go:172] (0xc002fe9c30) (0xc002954780) Stream removed, broadcasting: 1 I0821 23:26:02.359164 6 log.go:172] (0xc002fe9c30) (0xc002db7ea0) Stream removed, broadcasting: 3 I0821 23:26:02.359175 6 log.go:172] (0xc002fe9c30) (0xc0029d05a0) Stream removed, broadcasting: 5 Aug 21 23:26:02.359: INFO: Exec stderr: "" Aug 21 23:26:02.359: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.359: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.398327 6 log.go:172] (0xc002ec8370) (0xc002954a00) Create stream I0821 23:26:02.398358 6 log.go:172] (0xc002ec8370) (0xc002954a00) Stream added, broadcasting: 1 I0821 23:26:02.400857 6 log.go:172] (0xc002ec8370) Reply frame received for 1 I0821 23:26:02.400923 6 log.go:172] (0xc002ec8370) (0xc0025814a0) Create stream I0821 23:26:02.400951 6 log.go:172] (0xc002ec8370) (0xc0025814a0) Stream added, broadcasting: 3 I0821 23:26:02.402147 6 log.go:172] (0xc002ec8370) Reply frame received for 3 I0821 23:26:02.402208 6 log.go:172] (0xc002ec8370) (0xc002581540) Create stream I0821 23:26:02.402231 6 log.go:172] (0xc002ec8370) (0xc002581540) Stream added, broadcasting: 5 I0821 23:26:02.403263 6 log.go:172] (0xc002ec8370) Reply frame received for 5 I0821 23:26:02.461451 6 log.go:172] (0xc002ec8370) Data frame received for 3 I0821 23:26:02.461480 6 log.go:172] (0xc0025814a0) (3) Data frame handling I0821 23:26:02.461488 6 log.go:172] (0xc0025814a0) (3) Data frame sent I0821 23:26:02.461511 6 log.go:172] (0xc002ec8370) Data frame received for 3 I0821 23:26:02.461515 6 log.go:172] (0xc0025814a0) (3) Data frame handling I0821 23:26:02.461537 6 log.go:172] (0xc002ec8370) Data frame received for 5 I0821 23:26:02.461544 6 log.go:172] (0xc002581540) (5) Data frame handling I0821 23:26:02.462701 6 log.go:172] (0xc002ec8370) Data frame received for 1 I0821 23:26:02.462715 6 log.go:172] (0xc002954a00) (1) Data frame handling I0821 23:26:02.462727 
6 log.go:172] (0xc002954a00) (1) Data frame sent I0821 23:26:02.462736 6 log.go:172] (0xc002ec8370) (0xc002954a00) Stream removed, broadcasting: 1 I0821 23:26:02.462751 6 log.go:172] (0xc002ec8370) Go away received I0821 23:26:02.462895 6 log.go:172] (0xc002ec8370) (0xc002954a00) Stream removed, broadcasting: 1 I0821 23:26:02.462933 6 log.go:172] (0xc002ec8370) (0xc0025814a0) Stream removed, broadcasting: 3 I0821 23:26:02.462954 6 log.go:172] (0xc002ec8370) (0xc002581540) Stream removed, broadcasting: 5 Aug 21 23:26:02.462: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 21 23:26:02.463: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.463: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.492194 6 log.go:172] (0xc003b2abb0) (0xc002fe1900) Create stream I0821 23:26:02.492231 6 log.go:172] (0xc003b2abb0) (0xc002fe1900) Stream added, broadcasting: 1 I0821 23:26:02.494743 6 log.go:172] (0xc003b2abb0) Reply frame received for 1 I0821 23:26:02.494815 6 log.go:172] (0xc003b2abb0) (0xc002db7f40) Create stream I0821 23:26:02.494836 6 log.go:172] (0xc003b2abb0) (0xc002db7f40) Stream added, broadcasting: 3 I0821 23:26:02.495862 6 log.go:172] (0xc003b2abb0) Reply frame received for 3 I0821 23:26:02.495894 6 log.go:172] (0xc003b2abb0) (0xc002fe19a0) Create stream I0821 23:26:02.495910 6 log.go:172] (0xc003b2abb0) (0xc002fe19a0) Stream added, broadcasting: 5 I0821 23:26:02.497150 6 log.go:172] (0xc003b2abb0) Reply frame received for 5 I0821 23:26:02.549332 6 log.go:172] (0xc003b2abb0) Data frame received for 5 I0821 23:26:02.549364 6 log.go:172] (0xc002fe19a0) (5) Data frame handling I0821 23:26:02.549389 6 log.go:172] (0xc003b2abb0) Data frame received for 3 I0821 23:26:02.549402 6 log.go:172] (0xc002db7f40) (3) Data frame handling I0821 23:26:02.549415 6 log.go:172] (0xc002db7f40) (3) Data frame sent I0821 23:26:02.549425 6 log.go:172] (0xc003b2abb0) Data frame received for 3 I0821 23:26:02.549433 6 log.go:172] (0xc002db7f40) (3) Data frame handling I0821 23:26:02.549457 6 log.go:172] (0xc003b2abb0) Data frame received for 1 I0821 23:26:02.549465 6 log.go:172] (0xc002fe1900) (1) Data frame handling I0821 23:26:02.549478 6 log.go:172] (0xc002fe1900) (1) Data frame sent I0821 23:26:02.549489 6 log.go:172] (0xc003b2abb0) (0xc002fe1900) Stream removed, broadcasting: 1 I0821 23:26:02.549500 6 log.go:172] (0xc003b2abb0) Go away received I0821 23:26:02.549681 6 log.go:172] (0xc003b2abb0) (0xc002fe1900) Stream removed, broadcasting: 1 I0821 23:26:02.549697 6 log.go:172] (0xc003b2abb0) (0xc002db7f40) Stream removed, broadcasting: 3 I0821 23:26:02.549704 6 log.go:172] (0xc003b2abb0) (0xc002fe19a0) Stream removed, broadcasting: 5 Aug 21 23:26:02.549: INFO: Exec stderr: "" Aug 21 23:26:02.549: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.549: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.577988 6 log.go:172] (0xc002fb1810) (0xc0029d0780) Create stream I0821 23:26:02.578013 6 log.go:172] (0xc002fb1810) (0xc0029d0780) Stream added, broadcasting: 1 I0821 23:26:02.580388 6 log.go:172] (0xc002fb1810) Reply frame received for 1 I0821 23:26:02.580442 6 log.go:172] (0xc002fb1810) 
(0xc0029d0820) Create stream I0821 23:26:02.580464 6 log.go:172] (0xc002fb1810) (0xc0029d0820) Stream added, broadcasting: 3 I0821 23:26:02.581426 6 log.go:172] (0xc002fb1810) Reply frame received for 3 I0821 23:26:02.581467 6 log.go:172] (0xc002fb1810) (0xc002552000) Create stream I0821 23:26:02.581487 6 log.go:172] (0xc002fb1810) (0xc002552000) Stream added, broadcasting: 5 I0821 23:26:02.582356 6 log.go:172] (0xc002fb1810) Reply frame received for 5 I0821 23:26:02.639103 6 log.go:172] (0xc002fb1810) Data frame received for 3 I0821 23:26:02.639130 6 log.go:172] (0xc0029d0820) (3) Data frame handling I0821 23:26:02.639138 6 log.go:172] (0xc0029d0820) (3) Data frame sent I0821 23:26:02.639143 6 log.go:172] (0xc002fb1810) Data frame received for 3 I0821 23:26:02.639148 6 log.go:172] (0xc0029d0820) (3) Data frame handling I0821 23:26:02.639157 6 log.go:172] (0xc002fb1810) Data frame received for 5 I0821 23:26:02.639168 6 log.go:172] (0xc002552000) (5) Data frame handling I0821 23:26:02.640585 6 log.go:172] (0xc002fb1810) Data frame received for 1 I0821 23:26:02.640598 6 log.go:172] (0xc0029d0780) (1) Data frame handling I0821 23:26:02.640608 6 log.go:172] (0xc0029d0780) (1) Data frame sent I0821 23:26:02.640615 6 log.go:172] (0xc002fb1810) (0xc0029d0780) Stream removed, broadcasting: 1 I0821 23:26:02.640669 6 log.go:172] (0xc002fb1810) (0xc0029d0780) Stream removed, broadcasting: 1 I0821 23:26:02.640678 6 log.go:172] (0xc002fb1810) (0xc0029d0820) Stream removed, broadcasting: 3 I0821 23:26:02.640966 6 log.go:172] (0xc002fb1810) Go away received I0821 23:26:02.641038 6 log.go:172] (0xc002fb1810) (0xc002552000) Stream removed, broadcasting: 5 Aug 21 23:26:02.641: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 21 23:26:02.641: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.641: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.673164 6 log.go:172] (0xc002ec8bb0) (0xc002954dc0) Create stream I0821 23:26:02.673201 6 log.go:172] (0xc002ec8bb0) (0xc002954dc0) Stream added, broadcasting: 1 I0821 23:26:02.684259 6 log.go:172] (0xc002ec8bb0) Reply frame received for 1 I0821 23:26:02.684309 6 log.go:172] (0xc002ec8bb0) (0xc002581680) Create stream I0821 23:26:02.684323 6 log.go:172] (0xc002ec8bb0) (0xc002581680) Stream added, broadcasting: 3 I0821 23:26:02.685624 6 log.go:172] (0xc002ec8bb0) Reply frame received for 3 I0821 23:26:02.685674 6 log.go:172] (0xc002ec8bb0) (0xc002552280) Create stream I0821 23:26:02.685684 6 log.go:172] (0xc002ec8bb0) (0xc002552280) Stream added, broadcasting: 5 I0821 23:26:02.686516 6 log.go:172] (0xc002ec8bb0) Reply frame received for 5 I0821 23:26:02.746618 6 log.go:172] (0xc002ec8bb0) Data frame received for 5 I0821 23:26:02.746668 6 log.go:172] (0xc002552280) (5) Data frame handling I0821 23:26:02.746715 6 log.go:172] (0xc002ec8bb0) Data frame received for 3 I0821 23:26:02.746739 6 log.go:172] (0xc002581680) (3) Data frame handling I0821 23:26:02.746767 6 log.go:172] (0xc002581680) (3) Data frame sent I0821 23:26:02.746788 6 log.go:172] (0xc002ec8bb0) Data frame received for 3 I0821 23:26:02.746805 6 log.go:172] (0xc002581680) (3) Data frame handling I0821 23:26:02.748288 6 log.go:172] (0xc002ec8bb0) Data frame received for 1 I0821 23:26:02.748308 6 log.go:172] (0xc002954dc0) (1) Data frame handling I0821 
23:26:02.748318 6 log.go:172] (0xc002954dc0) (1) Data frame sent I0821 23:26:02.748332 6 log.go:172] (0xc002ec8bb0) (0xc002954dc0) Stream removed, broadcasting: 1 I0821 23:26:02.748350 6 log.go:172] (0xc002ec8bb0) Go away received I0821 23:26:02.748426 6 log.go:172] (0xc002ec8bb0) (0xc002954dc0) Stream removed, broadcasting: 1 I0821 23:26:02.748444 6 log.go:172] (0xc002ec8bb0) (0xc002581680) Stream removed, broadcasting: 3 I0821 23:26:02.748454 6 log.go:172] (0xc002ec8bb0) (0xc002552280) Stream removed, broadcasting: 5 Aug 21 23:26:02.748: INFO: Exec stderr: "" Aug 21 23:26:02.748: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.748: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.779673 6 log.go:172] (0xc002ec91e0) (0xc002954fa0) Create stream I0821 23:26:02.779695 6 log.go:172] (0xc002ec91e0) (0xc002954fa0) Stream added, broadcasting: 1 I0821 23:26:02.781895 6 log.go:172] (0xc002ec91e0) Reply frame received for 1 I0821 23:26:02.781944 6 log.go:172] (0xc002ec91e0) (0xc002955040) Create stream I0821 23:26:02.781965 6 log.go:172] (0xc002ec91e0) (0xc002955040) Stream added, broadcasting: 3 I0821 23:26:02.783065 6 log.go:172] (0xc002ec91e0) Reply frame received for 3 I0821 23:26:02.783096 6 log.go:172] (0xc002ec91e0) (0xc002581720) Create stream I0821 23:26:02.783111 6 log.go:172] (0xc002ec91e0) (0xc002581720) Stream added, broadcasting: 5 I0821 23:26:02.784266 6 log.go:172] (0xc002ec91e0) Reply frame received for 5 I0821 23:26:02.829743 6 log.go:172] (0xc002ec91e0) Data frame received for 3 I0821 23:26:02.829781 6 log.go:172] (0xc002955040) (3) Data frame handling I0821 23:26:02.829798 6 log.go:172] (0xc002955040) (3) Data frame sent I0821 23:26:02.829822 6 log.go:172] (0xc002ec91e0) Data frame received for 3 I0821 23:26:02.829835 6 log.go:172] (0xc002955040) (3) Data frame handling I0821 23:26:02.829849 6 log.go:172] (0xc002ec91e0) Data frame received for 5 I0821 23:26:02.829867 6 log.go:172] (0xc002581720) (5) Data frame handling I0821 23:26:02.831169 6 log.go:172] (0xc002ec91e0) Data frame received for 1 I0821 23:26:02.831198 6 log.go:172] (0xc002954fa0) (1) Data frame handling I0821 23:26:02.831211 6 log.go:172] (0xc002954fa0) (1) Data frame sent I0821 23:26:02.831221 6 log.go:172] (0xc002ec91e0) (0xc002954fa0) Stream removed, broadcasting: 1 I0821 23:26:02.831237 6 log.go:172] (0xc002ec91e0) Go away received I0821 23:26:02.831382 6 log.go:172] (0xc002ec91e0) (0xc002954fa0) Stream removed, broadcasting: 1 I0821 23:26:02.831399 6 log.go:172] (0xc002ec91e0) (0xc002955040) Stream removed, broadcasting: 3 I0821 23:26:02.831409 6 log.go:172] (0xc002ec91e0) (0xc002581720) Stream removed, broadcasting: 5 Aug 21 23:26:02.831: INFO: Exec stderr: "" Aug 21 23:26:02.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.831: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.856066 6 log.go:172] (0xc002fb1e40) (0xc0029d0aa0) Create stream I0821 23:26:02.856089 6 log.go:172] (0xc002fb1e40) (0xc0029d0aa0) Stream added, broadcasting: 1 I0821 23:26:02.858261 6 log.go:172] (0xc002fb1e40) Reply frame received for 1 I0821 23:26:02.858299 6 log.go:172] (0xc002fb1e40) (0xc0029550e0) Create stream I0821 23:26:02.858309 6 log.go:172] 
(0xc002fb1e40) (0xc0029550e0) Stream added, broadcasting: 3 I0821 23:26:02.859146 6 log.go:172] (0xc002fb1e40) Reply frame received for 3 I0821 23:26:02.859172 6 log.go:172] (0xc002fb1e40) (0xc002955180) Create stream I0821 23:26:02.859180 6 log.go:172] (0xc002fb1e40) (0xc002955180) Stream added, broadcasting: 5 I0821 23:26:02.859904 6 log.go:172] (0xc002fb1e40) Reply frame received for 5 I0821 23:26:02.929439 6 log.go:172] (0xc002fb1e40) Data frame received for 3 I0821 23:26:02.929473 6 log.go:172] (0xc0029550e0) (3) Data frame handling I0821 23:26:02.929489 6 log.go:172] (0xc0029550e0) (3) Data frame sent I0821 23:26:02.929499 6 log.go:172] (0xc002fb1e40) Data frame received for 3 I0821 23:26:02.929506 6 log.go:172] (0xc0029550e0) (3) Data frame handling I0821 23:26:02.929528 6 log.go:172] (0xc002fb1e40) Data frame received for 5 I0821 23:26:02.929539 6 log.go:172] (0xc002955180) (5) Data frame handling I0821 23:26:02.930954 6 log.go:172] (0xc002fb1e40) Data frame received for 1 I0821 23:26:02.930988 6 log.go:172] (0xc0029d0aa0) (1) Data frame handling I0821 23:26:02.931002 6 log.go:172] (0xc0029d0aa0) (1) Data frame sent I0821 23:26:02.931016 6 log.go:172] (0xc002fb1e40) (0xc0029d0aa0) Stream removed, broadcasting: 1 I0821 23:26:02.931075 6 log.go:172] (0xc002fb1e40) Go away received I0821 23:26:02.931108 6 log.go:172] (0xc002fb1e40) (0xc0029d0aa0) Stream removed, broadcasting: 1 I0821 23:26:02.931122 6 log.go:172] (0xc002fb1e40) (0xc0029550e0) Stream removed, broadcasting: 3 I0821 23:26:02.931132 6 log.go:172] (0xc002fb1e40) (0xc002955180) Stream removed, broadcasting: 5 Aug 21 23:26:02.931: INFO: Exec stderr: "" Aug 21 23:26:02.931: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5939 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 23:26:02.931: INFO: >>> kubeConfig: /root/.kube/config I0821 23:26:02.962806 6 log.go:172] (0xc001340840) (0xc0029d0c80) Create stream I0821 23:26:02.962832 6 log.go:172] (0xc001340840) (0xc0029d0c80) Stream added, broadcasting: 1 I0821 23:26:02.965015 6 log.go:172] (0xc001340840) Reply frame received for 1 I0821 23:26:02.965069 6 log.go:172] (0xc001340840) (0xc0025523c0) Create stream I0821 23:26:02.965092 6 log.go:172] (0xc001340840) (0xc0025523c0) Stream added, broadcasting: 3 I0821 23:26:02.966103 6 log.go:172] (0xc001340840) Reply frame received for 3 I0821 23:26:02.966163 6 log.go:172] (0xc001340840) (0xc0029552c0) Create stream I0821 23:26:02.966191 6 log.go:172] (0xc001340840) (0xc0029552c0) Stream added, broadcasting: 5 I0821 23:26:02.967107 6 log.go:172] (0xc001340840) Reply frame received for 5 I0821 23:26:03.034670 6 log.go:172] (0xc001340840) Data frame received for 5 I0821 23:26:03.034711 6 log.go:172] (0xc0029552c0) (5) Data frame handling I0821 23:26:03.034734 6 log.go:172] (0xc001340840) Data frame received for 3 I0821 23:26:03.034745 6 log.go:172] (0xc0025523c0) (3) Data frame handling I0821 23:26:03.034759 6 log.go:172] (0xc0025523c0) (3) Data frame sent I0821 23:26:03.034771 6 log.go:172] (0xc001340840) Data frame received for 3 I0821 23:26:03.034806 6 log.go:172] (0xc0025523c0) (3) Data frame handling I0821 23:26:03.036269 6 log.go:172] (0xc001340840) Data frame received for 1 I0821 23:26:03.036296 6 log.go:172] (0xc0029d0c80) (1) Data frame handling I0821 23:26:03.036328 6 log.go:172] (0xc0029d0c80) (1) Data frame sent I0821 23:26:03.036350 6 log.go:172] (0xc001340840) (0xc0029d0c80) Stream removed, broadcasting: 
1 I0821 23:26:03.036368 6 log.go:172] (0xc001340840) Go away received I0821 23:26:03.036551 6 log.go:172] (0xc001340840) (0xc0029d0c80) Stream removed, broadcasting: 1 I0821 23:26:03.036586 6 log.go:172] (0xc001340840) (0xc0025523c0) Stream removed, broadcasting: 3 I0821 23:26:03.036609 6 log.go:172] (0xc001340840) (0xc0029552c0) Stream removed, broadcasting: 5 Aug 21 23:26:03.036: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:26:03.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5939" for this suite. • [SLOW TEST:11.136 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":821,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:26:03.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:26:03.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8157" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":50,"skipped":839,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:26:03.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Aug 21 23:26:03.302: INFO: Waiting up to 5m0s for pod "var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172" in namespace "var-expansion-403" to be "success or failure" Aug 21 23:26:03.312: INFO: Pod "var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028303ms Aug 21 23:26:05.316: INFO: Pod "var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0140691s Aug 21 23:26:07.319: INFO: Pod "var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172": Phase="Running", Reason="", readiness=true. Elapsed: 4.017200018s Aug 21 23:26:09.322: INFO: Pod "var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020742688s STEP: Saw pod success Aug 21 23:26:09.322: INFO: Pod "var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172" satisfied condition "success or failure" Aug 21 23:26:09.326: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172 container dapi-container: STEP: delete the pod Aug 21 23:26:09.361: INFO: Waiting for pod var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172 to disappear Aug 21 23:26:09.372: INFO: Pod var-expansion-870b429c-b410-4d4f-af3f-2d5c5f455172 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:26:09.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-403" for this suite. 
• [SLOW TEST:6.162 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":851,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:26:09.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Aug 21 23:26:09.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3570' Aug 21 23:26:09.781: INFO: stderr: "" Aug 21 23:26:09.781: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 23:26:09.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3570' Aug 21 23:26:09.930: INFO: stderr: "" Aug 21 23:26:09.930: INFO: stdout: "update-demo-nautilus-6v2nb update-demo-nautilus-w8hjt " Aug 21 23:26:09.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6v2nb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:10.063: INFO: stderr: "" Aug 21 23:26:10.063: INFO: stdout: "" Aug 21 23:26:10.063: INFO: update-demo-nautilus-6v2nb is created but not running Aug 21 23:26:15.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3570' Aug 21 23:26:15.162: INFO: stderr: "" Aug 21 23:26:15.162: INFO: stdout: "update-demo-nautilus-6v2nb update-demo-nautilus-w8hjt " Aug 21 23:26:15.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6v2nb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:15.248: INFO: stderr: "" Aug 21 23:26:15.249: INFO: stdout: "true" Aug 21 23:26:15.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6v2nb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:15.351: INFO: stderr: "" Aug 21 23:26:15.351: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 23:26:15.351: INFO: validating pod update-demo-nautilus-6v2nb Aug 21 23:26:15.355: INFO: got data: { "image": "nautilus.jpg" } Aug 21 23:26:15.355: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 23:26:15.355: INFO: update-demo-nautilus-6v2nb is verified up and running Aug 21 23:26:15.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8hjt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:15.460: INFO: stderr: "" Aug 21 23:26:15.460: INFO: stdout: "true" Aug 21 23:26:15.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8hjt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:15.552: INFO: stderr: "" Aug 21 23:26:15.552: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 23:26:15.552: INFO: validating pod update-demo-nautilus-w8hjt Aug 21 23:26:15.556: INFO: got data: { "image": "nautilus.jpg" } Aug 21 23:26:15.556: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 23:26:15.556: INFO: update-demo-nautilus-w8hjt is verified up and running STEP: scaling down the replication controller Aug 21 23:26:15.559: INFO: scanned /root for discovery docs: Aug 21 23:26:15.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3570' Aug 21 23:26:16.715: INFO: stderr: "" Aug 21 23:26:16.715: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 21 23:26:16.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3570' Aug 21 23:26:16.811: INFO: stderr: "" Aug 21 23:26:16.811: INFO: stdout: "update-demo-nautilus-6v2nb update-demo-nautilus-w8hjt " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 21 23:26:21.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3570' Aug 21 23:26:21.908: INFO: stderr: "" Aug 21 23:26:21.908: INFO: stdout: "update-demo-nautilus-6v2nb " Aug 21 23:26:21.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6v2nb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:22.000: INFO: stderr: "" Aug 21 23:26:22.000: INFO: stdout: "true" Aug 21 23:26:22.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6v2nb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:22.097: INFO: stderr: "" Aug 21 23:26:22.097: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 23:26:22.097: INFO: validating pod update-demo-nautilus-6v2nb Aug 21 23:26:22.100: INFO: got data: { "image": "nautilus.jpg" } Aug 21 23:26:22.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 23:26:22.100: INFO: update-demo-nautilus-6v2nb is verified up and running STEP: scaling up the replication controller Aug 21 23:26:22.102: INFO: scanned /root for discovery docs: Aug 21 23:26:22.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3570' Aug 21 23:26:23.262: INFO: stderr: "" Aug 21 23:26:23.262: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 23:26:23.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3570' Aug 21 23:26:23.363: INFO: stderr: "" Aug 21 23:26:23.363: INFO: stdout: "update-demo-nautilus-6tsb6 update-demo-nautilus-6v2nb " Aug 21 23:26:23.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tsb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:23.463: INFO: stderr: "" Aug 21 23:26:23.463: INFO: stdout: "" Aug 21 23:26:23.463: INFO: update-demo-nautilus-6tsb6 is created but not running Aug 21 23:26:28.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3570' Aug 21 23:26:28.605: INFO: stderr: "" Aug 21 23:26:28.605: INFO: stdout: "update-demo-nautilus-6tsb6 update-demo-nautilus-6v2nb " Aug 21 23:26:28.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tsb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:28.691: INFO: stderr: "" Aug 21 23:26:28.691: INFO: stdout: "true" Aug 21 23:26:28.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tsb6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:28.788: INFO: stderr: "" Aug 21 23:26:28.788: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 23:26:28.788: INFO: validating pod update-demo-nautilus-6tsb6 Aug 21 23:26:28.792: INFO: got data: { "image": "nautilus.jpg" } Aug 21 23:26:28.792: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 23:26:28.792: INFO: update-demo-nautilus-6tsb6 is verified up and running Aug 21 23:26:28.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6v2nb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:28.888: INFO: stderr: "" Aug 21 23:26:28.888: INFO: stdout: "true" Aug 21 23:26:28.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6v2nb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3570' Aug 21 23:26:28.976: INFO: stderr: "" Aug 21 23:26:28.977: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 23:26:28.977: INFO: validating pod update-demo-nautilus-6v2nb Aug 21 23:26:28.979: INFO: got data: { "image": "nautilus.jpg" } Aug 21 23:26:28.979: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 23:26:28.979: INFO: update-demo-nautilus-6v2nb is verified up and running STEP: using delete to clean up resources Aug 21 23:26:28.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3570' Aug 21 23:26:29.085: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 21 23:26:29.085: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 21 23:26:29.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3570' Aug 21 23:26:29.192: INFO: stderr: "No resources found in kubectl-3570 namespace.\n" Aug 21 23:26:29.192: INFO: stdout: "" Aug 21 23:26:29.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3570 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 21 23:26:29.287: INFO: stderr: "" Aug 21 23:26:29.287: INFO: stdout: "update-demo-nautilus-6tsb6\nupdate-demo-nautilus-6v2nb\n" Aug 21 23:26:29.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3570' Aug 21 23:26:29.880: INFO: stderr: "No resources found in kubectl-3570 namespace.\n" Aug 21 23:26:29.880: INFO: stdout: "" Aug 21 23:26:29.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3570 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 21 23:26:29.969: INFO: stderr: "" Aug 21 23:26:29.969: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:26:29.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3570" for this suite. 
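------------------------------
The scale operations above go through `kubectl scale rc update-demo-nautilus --replicas=N`, with readiness polled via the go-template visible in the log (it inspects .status.containerStatuses for a running state). A rough client-go equivalent of the scale step, assuming recent client-go signatures (the context argument did not exist in the v1.17-era client shown here) and reusing the names from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.TODO()
	rcs := cs.CoreV1().ReplicationControllers("kubectl-3570")

	// Equivalent of `kubectl scale rc update-demo-nautilus --replicas=1`:
	// read the controller, rewrite spec.replicas, write it back.
	rc, err := rcs.Get(ctx, "update-demo-nautilus", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(1)
	rc.Spec.Replicas = &replicas
	if _, err := rcs.Update(ctx, rc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled update-demo-nautilus to 1 replica")
}
------------------------------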
• [SLOW TEST:20.598 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should scale a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":52,"skipped":856,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:26:29.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-086314c3-0787-41cc-87b5-ae613dc2f579 STEP: Creating a pod to test consume secrets Aug 21 23:26:30.356: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3" in namespace "projected-6043" to be "success or failure" Aug 21 23:26:30.366: INFO: Pod "pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.503648ms Aug 21 23:26:32.370: INFO: Pod "pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014141363s Aug 21 23:26:34.374: INFO: Pod "pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018339301s STEP: Saw pod success Aug 21 23:26:34.374: INFO: Pod "pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3" satisfied condition "success or failure" Aug 21 23:26:34.377: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3 container projected-secret-volume-test: STEP: delete the pod Aug 21 23:26:34.417: INFO: Waiting for pod pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3 to disappear Aug 21 23:26:34.426: INFO: Pod pod-projected-secrets-0ada3f68-5978-4e3e-9fa4-453fded621d3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:26:34.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6043" for this suite. 
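------------------------------
The projected-secret test above mounts a Secret through a projected volume while running as a non-root user: defaultMode fixes the permission bits of the projected files and fsGroup their group ownership. A minimal sketch of that shape of pod spec; the secret name, user/group IDs, mode, and paths are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, gid := int64(1000), int64(1001)
	mode := int32(0440)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run as a non-root user; fsGroup controls group ownership of
			// the projected files.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid,
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode, // file mode applied to projected keys
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------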
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":859,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:26:34.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-6df8ff5f-7f42-42c6-92b7-07a12488b1df in namespace container-probe-7703 Aug 21 23:26:38.538: INFO: Started pod test-webserver-6df8ff5f-7f42-42c6-92b7-07a12488b1df in namespace container-probe-7703 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 23:26:38.541: INFO: Initial restart count of pod test-webserver-6df8ff5f-7f42-42c6-92b7-07a12488b1df is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:30:39.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7703" for this suite. 
• [SLOW TEST:245.292 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:30:39.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 23:30:40.835: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 23:30:43.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 23:30:45.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649440, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 23:30:48.201: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:30:48.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6" for this suite. STEP: Destroying namespace "webhook-6-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.635 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":55,"skipped":881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:30:48.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:30:48.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4008" for this suite. 
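------------------------------
The "secure master service" check inspects the service that exposes the API server inside the cluster. A sketch of the equivalent read, assuming the conventional "kubernetes" service in the default namespace serving HTTPS on port 443 (the log itself does not show which fields the test asserts on):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Read the in-cluster API server service and print its ports; the
	// expectation is an "https" port on 443.
	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		fmt.Printf("port %s: %d -> %s\n", p.Name, p.Port, p.TargetPort.String())
	}
}
------------------------------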
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":56,"skipped":925,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:30:48.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 21 23:30:48.469: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:31:01.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5597" for this suite. 
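------------------------------
The pod submit/remove test above drives everything through a watch: it registers the watch first ("setting up watch"), then creates the pod, then deletes it gracefully and waits for the deletion event. A minimal sketch of that pattern with client-go; the namespace and label selector are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Open the watch before creating the pod so both creation and graceful
	// deletion are observed as events.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "time=created"}) // hypothetical label
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		switch ev.Type {
		case watch.Added:
			fmt.Println("pod creation observed")
		case watch.Deleted:
			fmt.Println("pod deletion observed")
			return
		}
	}
}
------------------------------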
• [SLOW TEST:13.317 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":936,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:31:01.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0821 23:31:11.846617 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 21 23:31:11.846: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:31:11.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3387" for this suite. 
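------------------------------
Deleting the rc "when not orphaning" means the delete request carries a propagation policy, so the garbage collector removes the dependent pods afterwards, which is what the wait above observes. A sketch of such a delete, with a hypothetical controller name and namespace:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation (as opposed to Orphan) lets the garbage
	// collector delete the pods owned by the controller after the
	// controller itself is gone.
	policy := metav1.DeletePropagationBackground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest-rc", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("rc deleted; dependent pods will be garbage collected")
}
------------------------------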
• [SLOW TEST:10.131 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":58,"skipped":966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:31:11.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-a50de5c0-6a2b-4ff5-8f6c-2cd46e594a8e in namespace container-probe-8667 Aug 21 23:31:18.075: INFO: Started pod busybox-a50de5c0-6a2b-4ff5-8f6c-2cd46e594a8e in namespace container-probe-8667 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 23:31:18.078: INFO: Initial restart count of pod busybox-a50de5c0-6a2b-4ff5-8f6c-2cd46e594a8e is 0 Aug 21 23:32:08.336: INFO: Restart count of pod container-probe-8667/busybox-a50de5c0-6a2b-4ff5-8f6c-2cd46e594a8e is now 1 (50.257942731s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:32:08.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8667" for this suite. 
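------------------------------
Here the exec probe is expected to start failing: the classic arrangement has the container create /tmp/health, sleep, then remove it, so `cat /tmp/health` fails and the kubelet restarts the container, consistent with the restart after ~50s recorded above. A sketch under those assumptions (the suite's exact command and thresholds are not shown in this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "busybox",
		Image: "busybox",
		// Healthy for 30s, then the probed file disappears and the kubelet
		// restarts the container once FailureThreshold is exceeded.
		Command: []string{"sh", "-c",
			"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			// Handler in the v1.17-era API; ProbeHandler in newer releases.
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
	fmt.Printf("%+v\n", c.LivenessProbe.Exec.Command)
}
------------------------------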
• [SLOW TEST:56.618 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":990,"failed":0} S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:32:08.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-a6c98c9b-7003-4342-8831-4331f8c1ccf2 STEP: Creating secret with name secret-projected-all-test-volume-2da4523c-50a0-47e9-bcb0-a6011db1a916 STEP: Creating a pod to test Check all projections for projected volume plugin Aug 21 23:32:08.547: INFO: Waiting up to 5m0s for pod "projected-volume-28805535-c345-49e6-943c-d8f3bf140f61" in namespace "projected-180" to be "success or failure" Aug 21 23:32:08.609: INFO: Pod "projected-volume-28805535-c345-49e6-943c-d8f3bf140f61": Phase="Pending", Reason="", readiness=false. Elapsed: 62.360307ms Aug 21 23:32:10.613: INFO: Pod "projected-volume-28805535-c345-49e6-943c-d8f3bf140f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065736879s Aug 21 23:32:12.617: INFO: Pod "projected-volume-28805535-c345-49e6-943c-d8f3bf140f61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069916351s STEP: Saw pod success Aug 21 23:32:12.617: INFO: Pod "projected-volume-28805535-c345-49e6-943c-d8f3bf140f61" satisfied condition "success or failure" Aug 21 23:32:12.620: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-28805535-c345-49e6-943c-d8f3bf140f61 container projected-all-volume-test: STEP: delete the pod Aug 21 23:32:12.651: INFO: Waiting for pod projected-volume-28805535-c345-49e6-943c-d8f3bf140f61 to disappear Aug 21 23:32:12.896: INFO: Pod projected-volume-28805535-c345-49e6-943c-d8f3bf140f61 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:32:12.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-180" for this suite. 
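------------------------------
The "all components" projection combines a ConfigMap, a Secret, and downward-API items in one projected volume, matching the configmap-projected-all-test-volume and secret-projected-all-test-volume objects created above. A minimal sketch of such a volume; the object names and file paths are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// All three sources land in a single mount, which is what
				// the "all components" projection test exercises.
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(len(vol.Projected.Sources), "projection sources")
}
------------------------------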
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":60,"skipped":991,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:32:12.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d0dc7ece-47e9-4ffb-847f-5ebaa989045c STEP: Creating a pod to test consume configMaps Aug 21 23:32:12.980: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f" in namespace "projected-5139" to be "success or failure" Aug 21 23:32:13.016: INFO: Pod "pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.602394ms Aug 21 23:32:15.020: INFO: Pod "pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040547045s Aug 21 23:32:17.025: INFO: Pod "pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044652516s STEP: Saw pod success Aug 21 23:32:17.025: INFO: Pod "pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f" satisfied condition "success or failure" Aug 21 23:32:17.027: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f container projected-configmap-volume-test: STEP: delete the pod Aug 21 23:32:17.073: INFO: Waiting for pod pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f to disappear Aug 21 23:32:17.086: INFO: Pod pod-projected-configmaps-0b6113ec-12f7-413c-814a-b15efd8cdc0f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:32:17.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5139" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":992,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:32:17.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 21 23:32:25.276: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 23:32:25.413: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 23:32:27.413: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 23:32:27.417: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 23:32:29.413: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 23:32:29.417: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 23:32:31.413: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 23:32:31.417: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 23:32:33.413: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 23:32:33.417: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:32:33.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4748" for this suite. 
• [SLOW TEST:16.333 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":992,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:32:33.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 21 23:32:33.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851" in namespace "downward-api-2299" to be "success or failure" Aug 21 23:32:33.523: INFO: Pod "downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132224ms Aug 21 23:32:35.526: INFO: Pod "downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013895104s Aug 21 23:32:37.556: INFO: Pod "downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851": Phase="Running", Reason="", readiness=true. Elapsed: 4.043393459s Aug 21 23:32:39.559: INFO: Pod "downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.046134498s STEP: Saw pod success Aug 21 23:32:39.559: INFO: Pod "downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851" satisfied condition "success or failure" Aug 21 23:32:39.576: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851 container client-container: STEP: delete the pod Aug 21 23:32:39.619: INFO: Waiting for pod downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851 to disappear Aug 21 23:32:39.632: INFO: Pod downwardapi-volume-35842f62-672d-4bf5-afeb-a3a571406851 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:32:39.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2299" for this suite. • [SLOW TEST:6.216 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1010,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:32:39.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 23:32:40.886: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 23:32:42.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649561, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649561, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649561, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649560, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 23:32:45.925: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:32:45.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8135-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:32:47.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-183" for this suite. STEP: Destroying namespace "webhook-183-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.789 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":64,"skipped":1023,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:32:47.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec 
"cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-ed211781-8ae4-49b2-bce0-517b7f14da03 in namespace container-probe-1213 Aug 21 23:32:53.546: INFO: Started pod busybox-ed211781-8ae4-49b2-bce0-517b7f14da03 in namespace container-probe-1213 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 23:32:53.549: INFO: Initial restart count of pod busybox-ed211781-8ae4-49b2-bce0-517b7f14da03 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:36:55.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1213" for this suite. • [SLOW TEST:248.546 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1043,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:36:55.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 21 23:37:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7995" for this suite. 
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:36:55.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:37:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7995" for this suite.
• [SLOW TEST:6.103 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when scheduling a read only busybox container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1046,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:37:02.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:37:06.200: INFO: Waiting up to 5m0s for pod "client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92" in namespace "pods-5075" to be "success or failure"
Aug 21 23:37:06.211: INFO: Pod "client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92": Phase="Pending", Reason="", readiness=false. Elapsed: 11.755434ms
Aug 21 23:37:08.356: INFO: Pod "client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156148546s
Aug 21 23:37:10.360: INFO: Pod "client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92": Phase="Running", Reason="", readiness=true. Elapsed: 4.159909314s
Aug 21 23:37:12.364: INFO: Pod "client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.16422009s
STEP: Saw pod success
Aug 21 23:37:12.364: INFO: Pod "client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92" satisfied condition "success or failure"
Aug 21 23:37:12.367: INFO: Trying to get logs from node jerma-worker pod client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92 container env3cont:
STEP: delete the pod
Aug 21 23:37:12.429: INFO: Waiting for pod client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92 to disappear
Aug 21 23:37:12.458: INFO: Pod client-envvars-c01e6344-1449-4a17-bdc3-5ccbf9d57d92 no longer exists
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:37:12.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5075" for this suite.
• [SLOW TEST:10.420 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should contain environment variables for services [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1072,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
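The spec above relies on the kubelet injecting Docker-links-style variables for every service visible when the pod starts. For a hypothetical service named "fooservice", a container could read them like this (a sketch; the variable naming scheme is the documented {SVCNAME}_SERVICE_HOST/_PORT convention, upper-cased with dashes turned into underscores):

package main

import (
	"fmt"
	"os"
)

func main() {
	// These variables exist only if the service predates the pod.
	fmt.Println("host:", os.Getenv("FOOSERVICE_SERVICE_HOST"))
	fmt.Println("port:", os.Getenv("FOOSERVICE_SERVICE_PORT"))
}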
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:37:12.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 21 23:37:17.121: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7389 pod-service-account-c9e23b74-e663-46f0-b868-d4f16c11a26a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 21 23:37:21.979: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7389 pod-service-account-c9e23b74-e663-46f0-b868-d4f16c11a26a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 21 23:37:22.189: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7389 pod-service-account-c9e23b74-e663-46f0-b868-d4f16c11a26a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:37:22.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7389" for this suite.
• [SLOW TEST:9.917 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":68,"skipped":1095,"failed":0}
SSSSSSS
------------------------------
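The three paths the test cats via kubectl exec are the standard in-pod locations for the service account credential. A small sketch of reading them from inside any pod (paths are the well-known mount points; nothing else here comes from the log):

package main

import (
	"fmt"
	"io/ioutil"
)

func main() {
	for _, p := range []string{
		"/var/run/secrets/kubernetes.io/serviceaccount/token",
		"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
		"/var/run/secrets/kubernetes.io/serviceaccount/namespace",
	} {
		b, err := ioutil.ReadFile(p) // os.ReadFile in modern Go
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", p, len(b))
	}
}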
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:37:22.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:37:28.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5928" for this suite.
STEP: Destroying namespace "nsdeletetest-417" for this suite.
Aug 21 23:37:28.721: INFO: Namespace nsdeletetest-417 was already deleted
STEP: Destroying namespace "nsdeletetest-7979" for this suite.
• [SLOW TEST:6.304 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":69,"skipped":1102,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:37:28.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-4332/secret-test-7b4fc3b6-0e21-49ac-b942-2f45d2da639a
STEP: Creating a pod to test consume secrets
Aug 21 23:37:28.834: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2" in namespace "secrets-4332" to be "success or failure"
Aug 21 23:37:28.841: INFO: Pod "pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.385761ms
Aug 21 23:37:30.845: INFO: Pod "pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01081067s
Aug 21 23:37:32.849: INFO: Pod "pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014963973s
Aug 21 23:37:34.889: INFO: Pod "pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054594781s
STEP: Saw pod success
Aug 21 23:37:34.889: INFO: Pod "pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2" satisfied condition "success or failure"
Aug 21 23:37:34.891: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2 container env-test:
STEP: delete the pod
Aug 21 23:37:35.070: INFO: Waiting for pod pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2 to disappear
Aug 21 23:37:35.217: INFO: Pod pod-configmaps-a1bf238a-82ef-4f1a-ab78-3069be398aa2 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:37:35.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4332" for this suite.
• [SLOW TEST:6.500 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
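Consuming a secret via the environment, as above, wires a secretKeyRef into the container's env. A minimal sketch (the secret name matches the log; the key, image, and command are illustrative assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EnvFromSecretPod sketches a pod that surfaces one secret key as an
// environment variable and exits after printing its environment.
var EnvFromSecretPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-sketch"}, // illustrative
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "env-test", // container name as it appears in the log
			Image:   "docker.io/library/busybox:1.29", // assumption
			Command: []string{"sh", "-c", "env"},
			Env: []corev1.EnvVar{{
				Name: "SECRET_DATA",
				ValueFrom: &corev1.EnvVarSource{
					SecretKeyRef: &corev1.SecretKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-7b4fc3b6-0e21-49ac-b942-2f45d2da639a"},
						Key:                  "data-1", // hypothetical key
					},
				},
			}},
		}},
	},
}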
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:37:35.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:37:35.736: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5e2a3860-9287-4ea3-ab84-0b7666e3c81f" in namespace "security-context-test-5418" to be "success or failure"
Aug 21 23:37:35.756: INFO: Pod "busybox-readonly-false-5e2a3860-9287-4ea3-ab84-0b7666e3c81f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.904721ms
Aug 21 23:37:37.760: INFO: Pod "busybox-readonly-false-5e2a3860-9287-4ea3-ab84-0b7666e3c81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023907028s
Aug 21 23:37:39.763: INFO: Pod "busybox-readonly-false-5e2a3860-9287-4ea3-ab84-0b7666e3c81f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02743798s
Aug 21 23:37:41.767: INFO: Pod "busybox-readonly-false-5e2a3860-9287-4ea3-ab84-0b7666e3c81f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031256408s
Aug 21 23:37:41.767: INFO: Pod "busybox-readonly-false-5e2a3860-9287-4ea3-ab84-0b7666e3c81f" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:37:41.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5418" for this suite.
• [SLOW TEST:6.548 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
When creating a pod with readOnlyRootFilesystem
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1156,"failed":0}
SSSSSSSSSSSSS
------------------------------
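The setting under test is the container-level securityContext field; the earlier read-only Kubelet spec in this log is the `true` counterpart of the same knob. A minimal sketch of the `false` case (image and write command are illustrative assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var writable = false // readOnlyRootFilesystem=false, i.e. the rootfs stays writable

// WritableRootfsPod sketches a pod that succeeds only if a write to the
// container's root filesystem is allowed.
var WritableRootfsPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false-sketch"}, // illustrative
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "busybox",
			Image:   "docker.io/library/busybox:1.29", // assumption
			Command: []string{"sh", "-c", "echo ok > /rootfs-write-check"}, // illustrative write
			SecurityContext: &corev1.SecurityContext{
				ReadOnlyRootFilesystem: &writable,
			},
		}},
	},
}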
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:37:41.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-5901d48f-0ff9-46ee-84c5-7e4eb0c250ef
STEP: Creating a pod to test consume configMaps
Aug 21 23:37:42.289: INFO: Waiting up to 5m0s for pod "pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc" in namespace "configmap-2784" to be "success or failure"
Aug 21 23:37:42.339: INFO: Pod "pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc": Phase="Pending", Reason="", readiness=false. Elapsed: 49.506304ms
Aug 21 23:37:44.342: INFO: Pod "pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052920864s
Aug 21 23:37:46.345: INFO: Pod "pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc": Phase="Running", Reason="", readiness=true. Elapsed: 4.056111659s
Aug 21 23:37:48.349: INFO: Pod "pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060372791s
STEP: Saw pod success
Aug 21 23:37:48.350: INFO: Pod "pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc" satisfied condition "success or failure"
Aug 21 23:37:48.352: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc container configmap-volume-test:
STEP: delete the pod
Aug 21 23:37:48.443: INFO: Waiting for pod pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc to disappear
Aug 21 23:37:48.446: INFO: Pod pod-configmaps-352539df-6e4c-419c-ac43-7d41430a6afc no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:37:48.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2784" for this suite.
• [SLOW TEST:6.758 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1169,"failed":0}
SSSSSSSSSSSS
------------------------------
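"With mappings" means the volume's items list remaps a configmap key onto a chosen file path rather than the default key name. A minimal sketch (configmap and container names match the log; the key-to-path mapping and image are illustrative assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MappedConfigMapPod sketches a configmap volume whose Items field remaps a
// key to a custom relative path under the mount point.
var MappedConfigMapPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-sketch"}, // illustrative
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:         "configmap-volume-test", // container name from the log
			Image:        "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumption
			VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map-5901d48f-0ff9-46ee-84c5-7e4eb0c250ef"},
					Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}}, // hypothetical mapping
				},
			},
		}},
	},
}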
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:37:48.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 23:37:48.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-696'
Aug 21 23:37:48.727: INFO: stderr: ""
Aug 21 23:37:48.727: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Aug 21 23:37:48.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-696'
Aug 21 23:38:01.632: INFO: stderr: ""
Aug 21 23:38:01.632: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:38:01.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-696" for this suite.
• [SLOW TEST:13.113 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":73,"skipped":1181,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:38:01.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 21 23:38:01.841: INFO: PodSpec: initContainers in spec.initContainers
Aug 21 23:38:56.480: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-16674319-1d13-41e6-91da-7288cc84d8b1", GenerateName:"", Namespace:"init-container-1483", SelfLink:"/api/v1/namespaces/init-container-1483/pods/pod-init-16674319-1d13-41e6-91da-7288cc84d8b1", UID:"1ca9d9b3-ddc9-438f-921a-22bd9def933a", ResourceVersion:"2283284", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733649881, loc:(*time.Location)(0x7931640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"841973624"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vhvz9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003c5a340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vhvz9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vhvz9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}},
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vhvz9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003268ed8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc005c38480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003268f60)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003268f80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003268f88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003268f8c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649881, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649881, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649881, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733649881, loc:(*time.Location)(0x7931640)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.162", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.162"}}, 
StartTime:(*v1.Time)(0xc0028e8780), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0028e8800), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f7a9a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f5a96ad4eb0bae1ebdfd62da6475ca871fbcfdbdcca4e4572bc11bbd0432d3b7", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0028e8860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0028e87c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00326900f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:38:56.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1483" for this suite.
• [SLOW TEST:55.172 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":74,"skipped":1200,"failed":0}
SSSS
------------------------------
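The pod dump above shows init1 on RestartCount:3 while init2 and run1 sit in Waiting: init containers run in order, and a failing one blocks everything behind it while the kubelet retries it under RestartPolicy Always. A sketch of that pod, with names, images, and commands taken from the dump itself:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// FailingInitPod sketches the spec from the dump: init1 exits non-zero, so
// init2 and the app container run1 must never start.
var FailingInitPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-init-sketch"}, // illustrative; the suite generates a UID-suffixed name
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyAlways,
		InitContainers: []corev1.Container{
			{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
			{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
		},
		Containers: []corev1.Container{
			{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
		},
	},
}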
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:38:56.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:38:57.174: INFO: Create a RollingUpdate DaemonSet
Aug 21 23:38:57.179: INFO: Check that daemon pods launch on every node of the cluster
Aug 21 23:38:57.375: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:38:57.378: INFO: Number of nodes with available pods: 0
Aug 21 23:38:57.378: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:38:58.382: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:38:58.385: INFO: Number of nodes with available pods: 0
Aug 21 23:38:58.385: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:38:59.383: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:38:59.387: INFO: Number of nodes with available pods: 0
Aug 21 23:38:59.387: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:39:00.560: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:39:00.564: INFO: Number of nodes with available pods: 0
Aug 21 23:39:00.564: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:39:01.383: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:39:01.386: INFO: Number of nodes with available pods: 0
Aug 21 23:39:01.386: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:39:02.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:39:02.389: INFO: Number of nodes with available pods: 0
Aug 21 23:39:02.389: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:39:03.382: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:39:03.386: INFO: Number of nodes with available pods: 2
Aug 21 23:39:03.386: INFO: Number of running nodes: 2, number of available pods: 2
Aug 21 23:39:03.386: INFO: Update the DaemonSet to trigger a rollout
Aug 21 23:39:03.392: INFO: Updating DaemonSet daemon-set
Aug 21 23:39:12.428: INFO: Roll back the DaemonSet before rollout is complete
Aug 21 23:39:12.435: INFO: Updating DaemonSet daemon-set
Aug 21 23:39:12.435: INFO: Make sure DaemonSet rollback is complete
Aug 21 23:39:12.464: INFO: Wrong image for pod: daemon-set-6nn7p. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 23:39:12.464: INFO: Pod daemon-set-6nn7p is not available
Aug 21 23:39:12.480: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:39:13.485: INFO: Wrong image for pod: daemon-set-6nn7p. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 23:39:13.485: INFO: Pod daemon-set-6nn7p is not available
Aug 21 23:39:13.489: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:39:14.538: INFO: Pod daemon-set-nq9rt is not available
Aug 21 23:39:14.569: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1498, will wait for the garbage collector to delete the pods
Aug 21 23:39:14.651: INFO: Deleting DaemonSet.extensions daemon-set took: 6.920333ms
Aug 21 23:39:14.952: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.261867ms
Aug 21 23:39:21.656: INFO: Number of nodes with available pods: 0
Aug 21 23:39:21.656: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 23:39:21.659: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1498/daemonsets","resourceVersion":"2283445"},"items":null}
Aug 21 23:39:21.661: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1498/pods","resourceVersion":"2283445"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:39:21.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1498" for this suite.
• [SLOW TEST:24.856 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":75,"skipped":1204,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
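Per the log, the rollout is triggered by swapping the template image to foo:non-existent and the rollback amounts to restoring the previous template before the broken rollout finishes, without restarting pods that never ran the bad image. A sketch of the RollingUpdate DaemonSet involved (the selector label is a hypothetical choice; name and image follow the log):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// RollingDaemonSet sketches the DaemonSet the spec above creates and then
// rolls back mid-rollout.
var RollingDaemonSet = appsv1.DaemonSet{
	ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"}, // name from the log
	Spec: appsv1.DaemonSetSpec{
		Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"daemonset-name": "daemon-set"}}, // hypothetical label
		UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"daemonset-name": "daemon-set"}},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine", // image from the log
				}},
			},
		},
	},
}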
Destroying namespace "svcaccounts-6066" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":76,"skipped":1242,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:39:22.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d5edc235-c51b-4e74-b301-f5d325f50fdb STEP: Creating a pod to test consume configMaps Aug 21 23:39:22.826: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b" in namespace "projected-9961" to be "success or failure" Aug 21 23:39:22.838: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.857473ms Aug 21 23:39:25.126: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300294439s Aug 21 23:39:27.204: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377605803s Aug 21 23:39:29.434: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6080109s Aug 21 23:39:31.548: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721816813s Aug 21 23:39:33.573: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.746761537s Aug 21 23:39:35.640: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Running", Reason="", readiness=true. Elapsed: 12.81396261s Aug 21 23:39:37.643: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:39:22.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d5edc235-c51b-4e74-b301-f5d325f50fdb
STEP: Creating a pod to test consume configMaps
Aug 21 23:39:22.826: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b" in namespace "projected-9961" to be "success or failure"
Aug 21 23:39:22.838: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.857473ms
Aug 21 23:39:25.126: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300294439s
Aug 21 23:39:27.204: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377605803s
Aug 21 23:39:29.434: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6080109s
Aug 21 23:39:31.548: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721816813s
Aug 21 23:39:33.573: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.746761537s
Aug 21 23:39:35.640: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Running", Reason="", readiness=true. Elapsed: 12.81396261s
Aug 21 23:39:37.643: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.817200647s
STEP: Saw pod success
Aug 21 23:39:37.643: INFO: Pod "pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b" satisfied condition "success or failure"
Aug 21 23:39:37.645: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 23:39:37.679: INFO: Waiting for pod pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b to disappear
Aug 21 23:39:37.689: INFO: Pod pod-projected-configmaps-2b295c65-3dff-4715-ad64-b33ed122929b no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:39:37.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9961" for this suite.
• [SLOW TEST:15.046 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
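The projected variant wraps the configmap in a projected volume, where defaultMode sets the file permission bits on the materialized files. A minimal sketch (the configmap and container names match the log; the 0400 mode and image are illustrative assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var defaultMode int32 = 0400 // illustrative mode, not taken from the log

// ProjectedConfigMapPod sketches a projected configmap volume with
// defaultMode set, as consumed by the spec above.
var ProjectedConfigMapPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-sketch"}, // illustrative
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:         "projected-configmap-volume-test", // container name from the log
			Image:        "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumption
			VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "projected-configmap-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &defaultMode,
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-d5edc235-c51b-4e74-b301-f5d325f50fdb"},
						},
					}},
				},
			},
		}},
	},
}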
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:39:37.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 21 23:39:37.747: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:39:45.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8490" for this suite.

• [SLOW TEST:7.971 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":78,"skipped":1280,"failed":0}
SSSSSSSSSSSSS
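For orientation, a minimal RestartAlways pod with init containers, the shape of pod this spec creates, could be written as follows (names and images are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init done"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl get pod init-demo -w   # init containers run to completion, in order, before "main" starts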
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1293,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 21 23:39:50.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 21 23:39:50.130: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:39:50.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:39:50.130: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
alternatives.log
containers/

(the same two-entry listing was returned for proxy requests (1) through (19); the rest of this spec's output, including its PASSED summary, is missing from the source log)
------------------------------
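The endpoint being exercised above is the node proxy subresource. Outside the suite, the same listing can be fetched with kubectl's raw API access (node name taken from the log; the listing will vary by node):

kubectl get --raw "/api/v1/nodes/jerma-worker/proxy/logs/"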
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 in namespace container-probe-1246
Aug 21 23:39:54.349: INFO: Started pod liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 in namespace container-probe-1246
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 23:39:54.351: INFO: Initial restart count of pod liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 is 0
Aug 21 23:40:06.385: INFO: Restart count of pod container-probe-1246/liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 is now 1 (12.033168016s elapsed)
Aug 21 23:40:26.934: INFO: Restart count of pod container-probe-1246/liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 is now 2 (32.583014446s elapsed)
Aug 21 23:40:46.973: INFO: Restart count of pod container-probe-1246/liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 is now 3 (52.622057676s elapsed)
Aug 21 23:41:07.055: INFO: Restart count of pod container-probe-1246/liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 is now 4 (1m12.703930386s elapsed)
Aug 21 23:42:17.197: INFO: Restart count of pod container-probe-1246/liveness-9c18cf05-850c-4da0-a969-47d9b28aa050 is now 5 (2m22.845828266s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:42:17.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1246" for this suite.

• [SLOW TEST:147.223 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1326,"failed":0}
SSSS
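A liveness probe that always fails reproduces the steadily climbing restartCount this spec asserts on; a minimal sketch (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "exit 1"]   # always fails, so the kubelet keeps restarting the container
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount only ever increases, mirroring the "is now 1 ... is now 5" lines above
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'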
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:42:17.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4114.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4114.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4114.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4114.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4114.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.86.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.86.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.86.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.86.184_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4114.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4114.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4114.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4114.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4114.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4114.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.86.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.86.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.86.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.86.184_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 23:42:26.387: INFO: Unable to read wheezy_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.390: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.393: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.397: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.418: INFO: Unable to read jessie_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.425: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:26.445: INFO: Lookups using dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a failed for: [wheezy_udp@dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_udp@dns-test-service.dns-4114.svc.cluster.local jessie_tcp@dns-test-service.dns-4114.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local]

Aug 21 23:42:31.450: INFO: Unable to read wheezy_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.458: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.461: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.484: INFO: Unable to read jessie_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.487: INFO: Unable to read jessie_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.489: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.492: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:31.508: INFO: Lookups using dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a failed for: [wheezy_udp@dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_udp@dns-test-service.dns-4114.svc.cluster.local jessie_tcp@dns-test-service.dns-4114.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local]

Aug 21 23:42:36.450: INFO: Unable to read wheezy_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.457: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.460: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.479: INFO: Unable to read jessie_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.482: INFO: Unable to read jessie_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.485: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.487: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:36.502: INFO: Lookups using dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a failed for: [wheezy_udp@dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_udp@dns-test-service.dns-4114.svc.cluster.local jessie_tcp@dns-test-service.dns-4114.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local]

Aug 21 23:42:41.449: INFO: Unable to read wheezy_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.456: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.459: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.480: INFO: Unable to read jessie_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.492: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.496: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:41.512: INFO: Lookups using dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a failed for: [wheezy_udp@dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_udp@dns-test-service.dns-4114.svc.cluster.local jessie_tcp@dns-test-service.dns-4114.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local]

Aug 21 23:42:46.449: INFO: Unable to read wheezy_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.458: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.477: INFO: Unable to read jessie_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.479: INFO: Unable to read jessie_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.482: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.484: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:46.517: INFO: Lookups using dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a failed for: [wheezy_udp@dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_udp@dns-test-service.dns-4114.svc.cluster.local jessie_tcp@dns-test-service.dns-4114.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local]

Aug 21 23:42:51.527: INFO: Unable to read wheezy_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.569: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.700: INFO: Unable to read jessie_udp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.706: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.708: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local from pod dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a: the server could not find the requested resource (get pods dns-test-1abe6866-e189-4eee-af21-493f74d2224a)
Aug 21 23:42:51.751: INFO: Lookups using dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a failed for: [wheezy_udp@dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@dns-test-service.dns-4114.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_udp@dns-test-service.dns-4114.svc.cluster.local jessie_tcp@dns-test-service.dns-4114.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4114.svc.cluster.local]

Aug 21 23:42:56.693: INFO: DNS probes using dns-4114/dns-test-1abe6866-e189-4eee-af21-493f74d2224a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:42:57.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4114" for this suite.

• [SLOW TEST:40.572 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":82,"skipped":1330,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
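The dig loops above are generated by the test; the underlying guarantee, that a Service gets records under <service>.<namespace>.svc.cluster.local, can be spot-checked by hand roughly like this (service name and namespace are illustrative; busybox:1.28 is chosen because its nslookup is better behaved than later tags):

kubectl create service clusterip demo-svc --tcp=80:80
kubectl run dns-client --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup demo-svc.default.svc.cluster.local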
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:42:58.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 23:42:58.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-452'
Aug 21 23:42:58.727: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 21 23:42:58.727: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 21 23:42:58.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-452'
Aug 21 23:42:58.997: INFO: stderr: ""
Aug 21 23:42:58.997: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:42:58.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-452" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":83,"skipped":1351,"failed":0}
SSSSSS
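The stderr captured above is kubectl itself flagging --generator=job/v1 as deprecated and pointing at kubectl create; the non-deprecated equivalent of the command the test ran is:

kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
kubectl delete job e2e-test-httpd-job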
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:42:59.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-33b3b6ca-b025-4284-ad6e-774e5abebcc5
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:43:09.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9005" for this suite.

• [SLOW TEST:10.781 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1357,"failed":0}
SSSSSSSS
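What "binary data reflected in volume" means in practice: non-UTF-8 content lands in a ConfigMap's binaryData field rather than data, and is written back out byte-for-byte when the ConfigMap is mounted. A hand-run sketch (file and object names are illustrative):

printf '\x01\x02\x03\x04' > payload.bin
kubectl create configmap binary-demo --from-file=payload=payload.bin
kubectl get configmap binary-demo -o yaml   # the value appears base64-encoded under binaryData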
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:43:09.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4410/configmap-test-0a3d741a-d9f6-4867-8909-9500f47f7aaa
STEP: Creating a pod to test consume configMaps
Aug 21 23:43:09.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc" in namespace "configmap-4410" to be "success or failure"
Aug 21 23:43:09.925: INFO: Pod "pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.963799ms
Aug 21 23:43:12.161: INFO: Pod "pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23911794s
Aug 21 23:43:14.165: INFO: Pod "pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.242492654s
STEP: Saw pod success
Aug 21 23:43:14.165: INFO: Pod "pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc" satisfied condition "success or failure"
Aug 21 23:43:14.167: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc container env-test: 
STEP: delete the pod
Aug 21 23:43:14.367: INFO: Waiting for pod pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc to disappear
Aug 21 23:43:14.450: INFO: Pod pod-configmaps-68e6987a-0b00-4f84-adbf-2756be3f83cc no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:43:14.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4410" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1365,"failed":0}
SSS
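A minimal pod consuming one ConfigMap key as a single environment variable, the pattern this spec covers, might look like this (all names illustrative):

kubectl create configmap env-demo --from-literal=special.how=very
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo HOW=$HOW"]
    env:
    - name: HOW
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: special.how
EOF
kubectl logs configmap-env-demo   # prints HOW=very once the pod has run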
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:43:14.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:43:25.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2847" for this suite.

• [SLOW TEST:11.406 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":86,"skipped":1368,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
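The create/observe/delete/release cycle in the steps above can be reproduced directly (namespace and names illustrative):

kubectl create namespace quota-demo
kubectl create quota svc-quota --hard=services=1 -n quota-demo
kubectl create service clusterip first-svc --tcp=80:80 -n quota-demo
kubectl get resourcequota svc-quota -n quota-demo -o yaml   # status.used.services rises to "1"
kubectl delete service first-svc -n quota-demo              # usage is released back to "0"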
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:43:25.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 23:43:30.693: INFO: Successfully updated pod "pod-update-e24f5148-ea63-4ba1-82a1-dbb6781148d8"
STEP: verifying the updated pod is in kubernetes
Aug 21 23:43:30.710: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:43:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1788" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1436,"failed":0}
SSSSSSSSSSS
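The log does not show which field was mutated; one commonly updatable field on a live pod is its labels, so an illustrative (not necessarily the test's) update looks like:

kubectl run pod-update-demo --image=busybox --restart=Never -- sleep 3600
kubectl label pod pod-update-demo time=morning
kubectl label pod pod-update-demo time=evening --overwrite
kubectl get pod pod-update-demo --show-labels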
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:43:30.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 21 23:43:30.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 21 23:43:31.060: INFO: stderr: ""
Aug 21 23:43:31.060: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:43:31.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2482" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":88,"skipped":1447,"failed":0}
SSSSSSSSSSSSSS
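The stdout above is the full group/version list; the assertion itself reduces to checking that the core group is present, which from a shell is just:

kubectl api-versions | grep -x v1   # exact-line match for the core API group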
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:43:31.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-1230/configmap-test-9b6bb26b-04ed-48a9-92bb-b8b4792fa534
STEP: Creating a pod to test consume configMaps
Aug 21 23:43:31.317: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb" in namespace "configmap-1230" to be "success or failure"
Aug 21 23:43:31.354: INFO: Pod "pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb": Phase="Pending", Reason="", readiness=false. Elapsed: 37.278809ms
Aug 21 23:43:33.371: INFO: Pod "pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05406186s
Aug 21 23:43:35.383: INFO: Pod "pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065874298s
Aug 21 23:43:37.387: INFO: Pod "pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069824411s
STEP: Saw pod success
Aug 21 23:43:37.387: INFO: Pod "pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb" satisfied condition "success or failure"
Aug 21 23:43:37.389: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb container env-test: 
STEP: delete the pod
Aug 21 23:43:37.492: INFO: Waiting for pod pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb to disappear
Aug 21 23:43:37.526: INFO: Pod pod-configmaps-2c9d28d6-2003-4839-a4c7-38a391b85cbb no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:43:37.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1230" for this suite.

• [SLOW TEST:6.449 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1461,"failed":0}
SSSS
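By contrast with the single-variable spec earlier, "consumable via the environment" suggests importing a whole ConfigMap with envFrom; an illustrative sketch (the mechanism is an assumption here, and all names are made up):

kubectl create configmap envfrom-demo --from-literal=DATA_1=value-1 --from-literal=DATA_2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-envfrom-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_"]
    envFrom:
    - configMapRef:
        name: envfrom-demo   # every key becomes an environment variable
EOF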
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:43:37.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-05b364d4-93fd-473b-a8be-ee2db43e44ba in namespace container-probe-7278
Aug 21 23:43:41.679: INFO: Started pod liveness-05b364d4-93fd-473b-a8be-ee2db43e44ba in namespace container-probe-7278
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 23:43:41.694: INFO: Initial restart count of pod liveness-05b364d4-93fd-473b-a8be-ee2db43e44ba is 0
Aug 21 23:44:04.198: INFO: Restart count of pod container-probe-7278/liveness-05b364d4-93fd-473b-a8be-ee2db43e44ba is now 1 (22.504794805s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:44:04.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7278" for this suite.

• [SLOW TEST:26.701 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1465,"failed":0}
SSSSSSSSSSSSSSSSSS
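An httpGet probe against /healthz, the restart trigger seen above (one restart after ~22s), can be sketched with the documentation's liveness image, which serves 200 on /healthz for its first 10 seconds and 500 afterwards (the image choice is the docs' example, not necessarily this test's):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: http-liveness-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF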
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:44:04.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-94d274ad-c0cc-4192-a6c3-df4ff9c61679
STEP: Creating a pod to test consume secrets
Aug 21 23:44:04.320: INFO: Waiting up to 5m0s for pod "pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8" in namespace "secrets-8015" to be "success or failure"
Aug 21 23:44:04.325: INFO: Pod "pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553121ms
Aug 21 23:44:06.455: INFO: Pod "pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134881984s
Aug 21 23:44:08.743: INFO: Pod "pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.422501287s
STEP: Saw pod success
Aug 21 23:44:08.743: INFO: Pod "pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8" satisfied condition "success or failure"
Aug 21 23:44:08.746: INFO: Trying to get logs from node jerma-worker pod pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8 container secret-volume-test: 
STEP: delete the pod
Aug 21 23:44:09.287: INFO: Waiting for pod pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8 to disappear
Aug 21 23:44:09.437: INFO: Pod pod-secrets-8492e60e-77e7-4e3d-8d5b-dc203fc0f6f8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:44:09.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8015" for this suite.

• [SLOW TEST:5.260 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1483,"failed":0}
SSSS
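Mounting one Secret at two paths in the same pod, which is all this spec requires, looks roughly like this (names illustrative):

kubectl create secret generic multi-vol-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-1
      mountPath: /etc/secret-volume-1
    - name: secret-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-1
    secret:
      secretName: multi-vol-secret
  - name: secret-2
    secret:
      secretName: multi-vol-secret   # same secret, second mount point
EOF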
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:44:09.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 23:44:09.823: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c" in namespace "downward-api-1547" to be "success or failure"
Aug 21 23:44:09.963: INFO: Pod "downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c": Phase="Pending", Reason="", readiness=false. Elapsed: 140.129603ms
Aug 21 23:44:11.967: INFO: Pod "downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143891959s
Aug 21 23:44:13.971: INFO: Pod "downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147881433s
Aug 21 23:44:15.975: INFO: Pod "downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15178574s
STEP: Saw pod success
Aug 21 23:44:15.975: INFO: Pod "downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c" satisfied condition "success or failure"
Aug 21 23:44:15.978: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c container client-container: 
STEP: delete the pod
Aug 21 23:44:16.026: INFO: Waiting for pod downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c to disappear
Aug 21 23:44:16.035: INFO: Pod downwardapi-volume-c4548fb3-6fba-4199-b011-c399f90e475c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:44:16.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1547" for this suite.

• [SLOW TEST:6.555 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1487,"failed":0}
SSSSSSSSSS
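The Downward API test above checks that a container can read its own memory request from a mounted file. A minimal sketch of that volume shape, assuming a 32Mi request and the mounttest image (both illustrative); the Divisor normalizes the quantity, so requests.memory of 32Mi divided by 1Mi is written to the file as "32":

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryRequestPod sketches the pod under test: a downwardAPI
// volume file whose content is the container's own memory request.
func downwardAPIMemoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container", // container name as logged above
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // illustrative
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
								Divisor:       resource.MustParse("1Mi"), // file contains the request in Mi units
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = downwardAPIMemoryRequestPod() }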
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:44:16.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-5ed2123e-5973-4226-960a-598d429867d5
STEP: Creating secret with name s-test-opt-upd-1cd817d8-5d59-4446-92b5-8faad4895b69
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5ed2123e-5973-4226-960a-598d429867d5
STEP: Updating secret s-test-opt-upd-1cd817d8-5d59-4446-92b5-8faad4895b69
STEP: Creating secret with name s-test-opt-create-afbbc6b5-54a2-48ff-a0d4-dbbc471da417
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:44:26.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8603" for this suite.

• [SLOW TEST:10.399 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1497,"failed":0}
SSSSS
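The projected-secret test above hinges on Optional sources: the pod mounts three secrets through one projected volume, and because each source is optional the pod keeps running while one secret is deleted, one is updated, and one is created, with the kubelet refreshing the mounted files. A minimal sketch of that volume, using the secret names from the log (UID suffixes elided):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalProjectedSecretVolume sketches the projected volume this test
// builds: several secrets projected into one mount, all marked Optional.
func optionalProjectedSecretVolume(names ...string) corev1.Volume {
	optional := true
	var sources []corev1.VolumeProjection
	for _, n := range names {
		sources = append(sources, corev1.VolumeProjection{
			Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: n},
				Optional:             &optional, // pod starts even if the secret is absent
			},
		})
	}
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{Sources: sources},
		},
	}
}

func main() {
	_ = optionalProjectedSecretVolume("s-test-opt-del", "s-test-opt-upd", "s-test-opt-create")
}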
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:44:26.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0821 23:44:38.610327       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 23:44:38.610: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:44:38.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1490" for this suite.

• [SLOW TEST:12.382 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":94,"skipped":1502,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
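The garbage-collector test above sets up pods with two owner references, then deletes one owner and asserts the pods survive because a valid owner remains. A minimal sketch of that ownership shape; the helper name and signature are assumptions, only the rc names come from the log:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// withTwoOwners sketches what "set half of pods ... to have rc
// simpletest-rc-to-stay as owner as well" amounts to. Deleting the first
// owner (e.g. with DeleteOptions{PropagationPolicy: &foreground}, where
// foreground = metav1.DeletePropagationForeground) must not collect these
// pods, since simpletest-rc-to-stay still owns them.
func withTwoOwners(pod *metav1.ObjectMeta, deletedUID, stayingUID types.UID) {
	blockOwnerDeletion := true
	pod.OwnerReferences = []metav1.OwnerReference{
		{APIVersion: "v1", Kind: "ReplicationController",
			Name: "simpletest-rc-to-be-deleted", UID: deletedUID,
			BlockOwnerDeletion: &blockOwnerDeletion},
		{APIVersion: "v1", Kind: "ReplicationController",
			Name: "simpletest-rc-to-stay", UID: stayingUID},
	}
}

func main() { _ = withTwoOwners }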
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:44:38.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Aug 21 23:44:43.336: INFO: Pod pod-hostip-48a0f42c-55b7-413a-b73a-f3ca177d9e38 has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:44:43.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8277" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1523,"failed":0}
SSSSSSSSS
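The host-IP test above only needs to observe that status.hostIP is populated once the pod is scheduled (172.18.0.6 in this run is a kind node address). A minimal sketch of that check; the helper is hypothetical, the field access is the real API:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostIPAssigned sketches the assertion: a started pod reports the IP of
// the node it landed on in status.hostIP.
func hostIPAssigned(pod *corev1.Pod) (string, error) {
	if pod.Status.HostIP == "" {
		return "", fmt.Errorf("pod %s has no hostIP yet", pod.Name)
	}
	return pod.Status.HostIP, nil
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{HostIP: "172.18.0.6"}}
	ip, _ := hostIPAssigned(p)
	fmt.Println(ip) // 172.18.0.6
}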
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:44:43.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:44:43.451: INFO: Waiting up to 5m0s for pod "busybox-user-65534-618f02f3-f666-46da-9713-ba5c9f13ebd6" in namespace "security-context-test-9199" to be "success or failure"
Aug 21 23:44:43.462: INFO: Pod "busybox-user-65534-618f02f3-f666-46da-9713-ba5c9f13ebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.175619ms
Aug 21 23:44:45.523: INFO: Pod "busybox-user-65534-618f02f3-f666-46da-9713-ba5c9f13ebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071535319s
Aug 21 23:44:47.546: INFO: Pod "busybox-user-65534-618f02f3-f666-46da-9713-ba5c9f13ebd6": Phase="Running", Reason="", readiness=true. Elapsed: 4.094484163s
Aug 21 23:44:49.624: INFO: Pod "busybox-user-65534-618f02f3-f666-46da-9713-ba5c9f13ebd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172375983s
Aug 21 23:44:49.624: INFO: Pod "busybox-user-65534-618f02f3-f666-46da-9713-ba5c9f13ebd6" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:44:49.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9199" for this suite.

• [SLOW TEST:6.305 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1532,"failed":0}
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:44:49.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4348 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4348;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4348 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4348;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4348.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4348.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4348.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4348.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4348.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4348.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4348.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 31.215.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.215.31_udp@PTR;check="$$(dig +tcp +noall +answer +search 31.215.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.215.31_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4348 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4348;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4348 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4348;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4348.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4348.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4348.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4348.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4348.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4348.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4348.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4348.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4348.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 31.215.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.215.31_udp@PTR;check="$$(dig +tcp +noall +answer +search 31.215.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.215.31_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 23:44:56.716: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.719: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.722: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.725: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.741: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.763: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.766: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.785: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.787: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.790: INFO: Unable to read jessie_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.793: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.796: INFO: Unable to read jessie_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.798: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.801: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.804: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:44:56.847: INFO: Lookups using dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4348 wheezy_tcp@dns-test-service.dns-4348 wheezy_udp@dns-test-service.dns-4348.svc wheezy_tcp@dns-test-service.dns-4348.svc wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4348 jessie_tcp@dns-test-service.dns-4348 jessie_udp@dns-test-service.dns-4348.svc jessie_tcp@dns-test-service.dns-4348.svc jessie_udp@_http._tcp.dns-test-service.dns-4348.svc jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc]

Aug 21 23:45:01.855: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.858: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.862: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.865: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.867: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.869: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.871: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.899: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.902: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.905: INFO: Unable to read jessie_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.908: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.915: INFO: Unable to read jessie_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.918: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.922: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.925: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:01.942: INFO: Lookups using dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4348 wheezy_tcp@dns-test-service.dns-4348 wheezy_udp@dns-test-service.dns-4348.svc wheezy_tcp@dns-test-service.dns-4348.svc wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4348 jessie_tcp@dns-test-service.dns-4348 jessie_udp@dns-test-service.dns-4348.svc jessie_tcp@dns-test-service.dns-4348.svc jessie_udp@_http._tcp.dns-test-service.dns-4348.svc jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc]

Aug 21 23:45:06.857: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.860: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.863: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.870: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.873: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.875: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.879: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.900: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.904: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.907: INFO: Unable to read jessie_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.910: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.914: INFO: Unable to read jessie_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.917: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.920: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.923: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:06.945: INFO: Lookups using dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4348 wheezy_tcp@dns-test-service.dns-4348 wheezy_udp@dns-test-service.dns-4348.svc wheezy_tcp@dns-test-service.dns-4348.svc wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4348 jessie_tcp@dns-test-service.dns-4348 jessie_udp@dns-test-service.dns-4348.svc jessie_tcp@dns-test-service.dns-4348.svc jessie_udp@_http._tcp.dns-test-service.dns-4348.svc jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc]

Aug 21 23:45:11.851: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.855: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.858: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.861: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.863: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.869: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.872: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.893: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.896: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.899: INFO: Unable to read jessie_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.902: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.905: INFO: Unable to read jessie_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.908: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.914: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.917: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:11.933: INFO: Lookups using dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4348 wheezy_tcp@dns-test-service.dns-4348 wheezy_udp@dns-test-service.dns-4348.svc wheezy_tcp@dns-test-service.dns-4348.svc wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4348 jessie_tcp@dns-test-service.dns-4348 jessie_udp@dns-test-service.dns-4348.svc jessie_tcp@dns-test-service.dns-4348.svc jessie_udp@_http._tcp.dns-test-service.dns-4348.svc jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc]

Aug 21 23:45:16.853: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.859: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.886: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.905: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.907: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.909: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.911: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.925: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.927: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.929: INFO: Unable to read jessie_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.931: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.933: INFO: Unable to read jessie_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.935: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.937: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.940: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:16.955: INFO: Lookups using dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4348 wheezy_tcp@dns-test-service.dns-4348 wheezy_udp@dns-test-service.dns-4348.svc wheezy_tcp@dns-test-service.dns-4348.svc wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4348 jessie_tcp@dns-test-service.dns-4348 jessie_udp@dns-test-service.dns-4348.svc jessie_tcp@dns-test-service.dns-4348.svc jessie_udp@_http._tcp.dns-test-service.dns-4348.svc jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc]

Aug 21 23:45:21.858: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.887: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.891: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.894: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.897: INFO: Unable to read wheezy_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.900: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.913: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.916: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.937: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.940: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.943: INFO: Unable to read jessie_udp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.946: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348 from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.950: INFO: Unable to read jessie_udp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.953: INFO: Unable to read jessie_tcp@dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.956: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.959: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc from pod dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5: the server could not find the requested resource (get pods dns-test-8a23d979-9802-4294-8eed-089f8a2627f5)
Aug 21 23:45:21.978: INFO: Lookups using dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4348 wheezy_tcp@dns-test-service.dns-4348 wheezy_udp@dns-test-service.dns-4348.svc wheezy_tcp@dns-test-service.dns-4348.svc wheezy_udp@_http._tcp.dns-test-service.dns-4348.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4348.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4348 jessie_tcp@dns-test-service.dns-4348 jessie_udp@dns-test-service.dns-4348.svc jessie_tcp@dns-test-service.dns-4348.svc jessie_udp@_http._tcp.dns-test-service.dns-4348.svc jessie_tcp@_http._tcp.dns-test-service.dns-4348.svc]

Aug 21 23:45:27.580: INFO: DNS probes using dns-4348/dns-test-8a23d979-9802-4294-8eed-089f8a2627f5 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:45:29.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4348" for this suite.

• [SLOW TEST:39.957 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":97,"skipped":1532,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
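The DNS test above succeeds because partial names like "dns-test-service" or "dns-test-service.dns-4348.svc" are expanded through the pod's resolv.conf search path before being tried verbatim. A simplified model of that resolver behavior, assuming the kubelet defaults (ndots:5 and a namespace-first search list); the early probe failures in the log are just the test polling until the records propagate:

package main

import (
	"fmt"
	"strings"
)

// searchCandidates models glibc-style search-path expansion: a name with
// fewer dots than ndots is tried against each search domain first, so
// "dns-test-service" becomes "dns-test-service.dns-4348.svc.cluster.local."
// on the first search entry.
func searchCandidates(name string, searchDomains []string, ndots int) []string {
	if strings.HasSuffix(name, ".") { // already fully qualified
		return []string{name}
	}
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, d := range searchDomains {
			out = append(out, name+"."+d+".")
		}
	}
	return append(out, name+".") // verbatim attempt comes last
}

func main() {
	search := []string{"dns-4348.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, c := range searchCandidates("dns-test-service", search, 5) {
		fmt.Println(c)
	}
}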
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:45:29.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-21a26679-44a3-4573-ad57-8a5d16e98291
STEP: Creating a pod to test consume secrets
Aug 21 23:45:30.187: INFO: Waiting up to 5m0s for pod "pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37" in namespace "secrets-950" to be "success or failure"
Aug 21 23:45:30.192: INFO: Pod "pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.949426ms
Aug 21 23:45:32.342: INFO: Pod "pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155571616s
Aug 21 23:45:34.346: INFO: Pod "pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158924318s
STEP: Saw pod success
Aug 21 23:45:34.346: INFO: Pod "pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37" satisfied condition "success or failure"
Aug 21 23:45:34.348: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37 container secret-volume-test: 
STEP: delete the pod
Aug 21 23:45:34.524: INFO: Waiting for pod pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37 to disappear
Aug 21 23:45:34.629: INFO: Pod pod-secrets-e3eb5412-d0c7-4e60-950f-7b2679782c37 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:45:34.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-950" for this suite.

• [SLOW TEST:5.030 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1572,"failed":0}
SSSSSSSSSSSSSSSSS
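The "with mappings" variant above differs from a plain secret mount in one way: Items remaps a secret key to a chosen file path under the mount point, instead of the default one-file-per-key layout. A minimal sketch; the key and path values are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// mappedSecretVolume sketches the mapped layout: only the listed keys are
// projected, each at its given relative path.
func mappedSecretVolume(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Items: []corev1.KeyToPath{
					{Key: "data-1", Path: "new-path-data-1"}, // illustrative mapping
				},
			},
		},
	}
}

func main() { _ = mappedSecretVolume("secret-test-map") }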
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:45:34.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 21 23:45:34.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3720'
Aug 21 23:45:34.963: INFO: stderr: ""
Aug 21 23:45:34.963: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 23:45:34.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3720'
Aug 21 23:45:35.095: INFO: stderr: ""
Aug 21 23:45:35.095: INFO: stdout: "update-demo-nautilus-7gfcd update-demo-nautilus-l4s99 "
Aug 21 23:45:35.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gfcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:45:35.182: INFO: stderr: ""
Aug 21 23:45:35.182: INFO: stdout: ""
Aug 21 23:45:35.182: INFO: update-demo-nautilus-7gfcd is created but not running
Aug 21 23:45:40.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3720'
Aug 21 23:45:40.283: INFO: stderr: ""
Aug 21 23:45:40.283: INFO: stdout: "update-demo-nautilus-7gfcd update-demo-nautilus-l4s99 "
Aug 21 23:45:40.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gfcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:45:40.371: INFO: stderr: ""
Aug 21 23:45:40.371: INFO: stdout: "true"
Aug 21 23:45:40.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gfcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:45:40.469: INFO: stderr: ""
Aug 21 23:45:40.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 23:45:40.469: INFO: validating pod update-demo-nautilus-7gfcd
Aug 21 23:45:40.473: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 23:45:40.473: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 23:45:40.473: INFO: update-demo-nautilus-7gfcd is verified up and running
Aug 21 23:45:40.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4s99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:45:40.569: INFO: stderr: ""
Aug 21 23:45:40.569: INFO: stdout: "true"
Aug 21 23:45:40.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4s99 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:45:40.659: INFO: stderr: ""
Aug 21 23:45:40.659: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 23:45:40.659: INFO: validating pod update-demo-nautilus-l4s99
Aug 21 23:45:40.662: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 23:45:40.662: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 23:45:40.662: INFO: update-demo-nautilus-l4s99 is verified up and running
STEP: rolling-update to new replication controller
Aug 21 23:45:40.664: INFO: scanned /root for discovery docs: 
Aug 21 23:45:40.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3720'
Aug 21 23:46:03.312: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 21 23:46:03.312: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 23:46:03.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3720'
Aug 21 23:46:03.434: INFO: stderr: ""
Aug 21 23:46:03.434: INFO: stdout: "update-demo-kitten-2k9qc update-demo-kitten-4dkkf "
Aug 21 23:46:03.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2k9qc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:46:03.534: INFO: stderr: ""
Aug 21 23:46:03.534: INFO: stdout: "true"
Aug 21 23:46:03.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2k9qc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:46:03.628: INFO: stderr: ""
Aug 21 23:46:03.628: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 21 23:46:03.628: INFO: validating pod update-demo-kitten-2k9qc
Aug 21 23:46:03.632: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 21 23:46:03.632: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 21 23:46:03.632: INFO: update-demo-kitten-2k9qc is verified up and running
Aug 21 23:46:03.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4dkkf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:46:03.724: INFO: stderr: ""
Aug 21 23:46:03.724: INFO: stdout: "true"
Aug 21 23:46:03.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4dkkf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3720'
Aug 21 23:46:03.810: INFO: stderr: ""
Aug 21 23:46:03.810: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 21 23:46:03.810: INFO: validating pod update-demo-kitten-4dkkf
Aug 21 23:46:03.814: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 21 23:46:03.814: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 21 23:46:03.814: INFO: update-demo-kitten-4dkkf is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:46:03.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3720" for this suite.

• [SLOW TEST:29.181 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":99,"skipped":1589,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:46:03.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 21 23:46:03.900: INFO: Waiting up to 5m0s for pod "pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8" in namespace "emptydir-7468" to be "success or failure"
Aug 21 23:46:03.904: INFO: Pod "pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397065ms
Aug 21 23:46:05.908: INFO: Pod "pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008673408s
Aug 21 23:46:07.913: INFO: Pod "pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013387973s
STEP: Saw pod success
Aug 21 23:46:07.913: INFO: Pod "pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8" satisfied condition "success or failure"
Aug 21 23:46:07.916: INFO: Trying to get logs from node jerma-worker pod pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8 container test-container: 
STEP: delete the pod
Aug 21 23:46:07.932: INFO: Waiting for pod pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8 to disappear
Aug 21 23:46:07.943: INFO: Pod pod-2fc52248-15cc-49b0-9697-b23d6f0cadc8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:46:07.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7468" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1600,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:46:07.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 21 23:46:08.066: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-979 /api/v1/namespaces/watch-979/configmaps/e2e-watch-test-watch-closed 6ba5f191-5967-485a-bd0d-d60b9fa4ed7f 2285661 0 2020-08-21 23:46:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 23:46:08.067: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-979 /api/v1/namespaces/watch-979/configmaps/e2e-watch-test-watch-closed 6ba5f191-5967-485a-bd0d-d60b9fa4ed7f 2285662 0 2020-08-21 23:46:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 21 23:46:08.083: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-979 /api/v1/namespaces/watch-979/configmaps/e2e-watch-test-watch-closed 6ba5f191-5967-485a-bd0d-d60b9fa4ed7f 2285663 0 2020-08-21 23:46:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 23:46:08.083: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-979 /api/v1/namespaces/watch-979/configmaps/e2e-watch-test-watch-closed 6ba5f191-5967-485a-bd0d-d60b9fa4ed7f 2285664 0 2020-08-21 23:46:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:46:08.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-979" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":101,"skipped":1604,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:46:08.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:46:08.308: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"84893dc9-4cc6-4930-a893-bf949af81fb1", Controller:(*bool)(0xc0029e10d2), BlockOwnerDeletion:(*bool)(0xc0029e10d3)}}
Aug 21 23:46:08.360: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"00b4e8a7-6e6d-4f70-90f2-067a297aaf40", Controller:(*bool)(0xc002dd2be2), BlockOwnerDeletion:(*bool)(0xc002dd2be3)}}
Aug 21 23:46:08.375: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3396b49f-7f97-4149-be26-da9ac93a9f06", Controller:(*bool)(0xc0029e1282), BlockOwnerDeletion:(*bool)(0xc0029e1283)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:46:13.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2916" for this suite.

• [SLOW TEST:5.377 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":102,"skipped":1604,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:46:13.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-c8q6
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 23:46:13.656: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-c8q6" in namespace "subpath-2691" to be "success or failure"
Aug 21 23:46:13.702: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.078518ms
Aug 21 23:46:16.073: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417857538s
Aug 21 23:46:18.077: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 4.42181253s
Aug 21 23:46:20.080: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 6.424768853s
Aug 21 23:46:22.085: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 8.429284391s
Aug 21 23:46:24.089: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 10.433318024s
Aug 21 23:46:26.092: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 12.436465469s
Aug 21 23:46:28.097: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 14.441123838s
Aug 21 23:46:30.101: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 16.444938977s
Aug 21 23:46:32.106: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 18.450143841s
Aug 21 23:46:34.110: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 20.453989198s
Aug 21 23:46:36.113: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Running", Reason="", readiness=true. Elapsed: 22.457851663s
Aug 21 23:46:38.133: INFO: Pod "pod-subpath-test-projected-c8q6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.477709899s
STEP: Saw pod success
Aug 21 23:46:38.133: INFO: Pod "pod-subpath-test-projected-c8q6" satisfied condition "success or failure"
Aug 21 23:46:38.136: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-c8q6 container test-container-subpath-projected-c8q6: 
STEP: delete the pod
Aug 21 23:46:38.168: INFO: Waiting for pod pod-subpath-test-projected-c8q6 to disappear
Aug 21 23:46:38.171: INFO: Pod pod-subpath-test-projected-c8q6 no longer exists
STEP: Deleting pod pod-subpath-test-projected-c8q6
Aug 21 23:46:38.171: INFO: Deleting pod "pod-subpath-test-projected-c8q6" in namespace "subpath-2691"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:46:38.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2691" for this suite.

• [SLOW TEST:24.691 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":103,"skipped":1610,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:46:38.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 23:46:38.829: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 23:46:40.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 23:46:42.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650398, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 23:46:45.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:46:47.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4842" for this suite.
STEP: Destroying namespace "webhook-4842-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.606 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":104,"skipped":1668,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:46:49.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 21 23:46:51.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:47:07.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7609" for this suite.

• [SLOW TEST:17.924 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":105,"skipped":1669,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:47:07.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9
Aug 21 23:47:07.795: INFO: Pod name my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9: Found 0 pods out of 1
Aug 21 23:47:12.798: INFO: Pod name my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9: Found 1 pod out of 1
Aug 21 23:47:12.798: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9" are running
Aug 21 23:47:14.804: INFO: Pod "my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9-f9vn8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 23:47:07 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 23:47:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 23:47:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 23:47:07 +0000 UTC Reason: Message:}])
Aug 21 23:47:14.805: INFO: Trying to dial the pod
Aug 21 23:47:19.815: INFO: Controller my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9: Got expected result from replica 1 [my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9-f9vn8]: "my-hostname-basic-56ddeb07-b747-4506-b910-6ef84de402b9-f9vn8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:47:19.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3561" for this suite.

• [SLOW TEST:12.112 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":106,"skipped":1677,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:47:19.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9215
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9215
STEP: Creating statefulset with conflicting port in namespace statefulset-9215
STEP: Waiting until pod test-pod starts running in namespace statefulset-9215
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-9215
Aug 21 23:47:24.367: INFO: Observed stateful pod in namespace: statefulset-9215, name: ss-0, uid: c11ca80c-02c3-4d3b-86bd-aaef01a9c7f4, status phase: Pending. Waiting for statefulset controller to delete.
Aug 21 23:47:24.747: INFO: Observed stateful pod in namespace: statefulset-9215, name: ss-0, uid: c11ca80c-02c3-4d3b-86bd-aaef01a9c7f4, status phase: Failed. Waiting for statefulset controller to delete.
Aug 21 23:47:24.846: INFO: Observed stateful pod in namespace: statefulset-9215, name: ss-0, uid: c11ca80c-02c3-4d3b-86bd-aaef01a9c7f4, status phase: Failed. Waiting for statefulset controller to delete.
Aug 21 23:47:24.850: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9215
STEP: Removing pod with conflicting port in namespace statefulset-9215
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-9215 and enters the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 23:47:30.961: INFO: Deleting all statefulset in ns statefulset-9215
Aug 21 23:47:30.965: INFO: Scaling statefulset ss to 0
Aug 21 23:47:51.002: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 23:47:51.005: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:47:51.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9215" for this suite.

• [SLOW TEST:31.202 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":107,"skipped":1685,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:47:51.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-9a2dbd1f-ee0d-4fd5-bbe5-18ac40c84004
STEP: Creating a pod to test consume secrets
Aug 21 23:47:51.090: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3" in namespace "projected-6537" to be "success or failure"
Aug 21 23:47:51.134: INFO: Pod "pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3": Phase="Pending", Reason="", readiness=false. Elapsed: 44.114543ms
Aug 21 23:47:53.200: INFO: Pod "pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110016461s
Aug 21 23:47:55.204: INFO: Pod "pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113810928s
STEP: Saw pod success
Aug 21 23:47:55.204: INFO: Pod "pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3" satisfied condition "success or failure"
Aug 21 23:47:55.207: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3 container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 23:47:55.250: INFO: Waiting for pod pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3 to disappear
Aug 21 23:47:55.255: INFO: Pod pod-projected-secrets-e1b39054-e341-4078-b9e0-6ef627ca3ba3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:47:55.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6537" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:47:55.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 21 23:47:55.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8576'
Aug 21 23:47:58.581: INFO: stderr: ""
Aug 21 23:47:58.581: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 21 23:47:59.645: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 21 23:47:59.645: INFO: Found 0 / 1
Aug 21 23:48:00.584: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 21 23:48:00.584: INFO: Found 0 / 1
Aug 21 23:48:01.586: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 21 23:48:01.586: INFO: Found 1 / 1
Aug 21 23:48:01.586: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 21 23:48:01.589: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 21 23:48:01.589: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Aug 21 23:48:01.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-89tpk --namespace=kubectl-8576 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 21 23:48:01.698: INFO: stderr: ""
Aug 21 23:48:01.698: INFO: stdout: "pod/agnhost-master-89tpk patched\n"
STEP: checking annotations
Aug 21 23:48:01.737: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 21 23:48:01.737: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:01.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8576" for this suite.

• [SLOW TEST:6.482 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":109,"skipped":1742,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:01.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 23:48:01.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c" in namespace "downward-api-5946" to be "success or failure"
Aug 21 23:48:01.826: INFO: Pod "downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.569853ms
Aug 21 23:48:03.830: INFO: Pod "downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013619972s
Aug 21 23:48:05.899: INFO: Pod "downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081956566s
STEP: Saw pod success
Aug 21 23:48:05.899: INFO: Pod "downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c" satisfied condition "success or failure"
Aug 21 23:48:05.916: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c container client-container: 
STEP: delete the pod
Aug 21 23:48:05.942: INFO: Waiting for pod downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c to disappear
Aug 21 23:48:05.952: INFO: Pod downwardapi-volume-13e40ab3-f25e-45ec-8223-5120efe9714c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:05.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5946" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1743,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:05.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 21 23:48:06.041: INFO: Waiting up to 5m0s for pod "pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b" in namespace "emptydir-8571" to be "success or failure"
Aug 21 23:48:06.048: INFO: Pod "pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.292759ms
Aug 21 23:48:08.222: INFO: Pod "pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181451965s
Aug 21 23:48:10.226: INFO: Pod "pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b": Phase="Running", Reason="", readiness=true. Elapsed: 4.185334314s
Aug 21 23:48:12.230: INFO: Pod "pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188847998s
STEP: Saw pod success
Aug 21 23:48:12.230: INFO: Pod "pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b" satisfied condition "success or failure"
Aug 21 23:48:12.232: INFO: Trying to get logs from node jerma-worker pod pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b container test-container: 
STEP: delete the pod
Aug 21 23:48:12.263: INFO: Waiting for pod pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b to disappear
Aug 21 23:48:12.267: INFO: Pod pod-2d72f2a8-b127-4c17-b878-6ca1cf78247b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:12.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8571" for this suite.

• [SLOW TEST:6.316 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:12.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 23:48:16.450: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:16.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2308" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1780,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:16.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 21 23:48:24.636: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 23:48:24.659: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 21 23:48:26.659: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 23:48:26.663: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 21 23:48:28.659: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 23:48:28.663: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:28.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9376" for this suite.

• [SLOW TEST:12.153 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1784,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:28.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:48:28.771: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:29.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5339" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":114,"skipped":1800,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:29.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3640
STEP: Creating an active service to test reachability when its FQDN is referred to as the externalName of another service
STEP: creating service externalsvc in namespace services-3640
STEP: creating replication controller externalsvc in namespace services-3640
I0821 23:48:29.978407       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3640, replica count: 2
I0821 23:48:33.028891       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 23:48:36.029207       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 21 23:48:36.069: INFO: Creating new exec pod
Aug 21 23:48:40.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3640 execpodmkjp7 -- /bin/sh -x -c nslookup clusterip-service'
Aug 21 23:48:40.324: INFO: stderr: "I0821 23:48:40.233639    2823 log.go:172] (0xc0000f71e0) (0xc000685b80) Create stream\nI0821 23:48:40.233697    2823 log.go:172] (0xc0000f71e0) (0xc000685b80) Stream added, broadcasting: 1\nI0821 23:48:40.236911    2823 log.go:172] (0xc0000f71e0) Reply frame received for 1\nI0821 23:48:40.236966    2823 log.go:172] (0xc0000f71e0) (0xc0005c0000) Create stream\nI0821 23:48:40.236980    2823 log.go:172] (0xc0000f71e0) (0xc0005c0000) Stream added, broadcasting: 3\nI0821 23:48:40.238085    2823 log.go:172] (0xc0000f71e0) Reply frame received for 3\nI0821 23:48:40.238122    2823 log.go:172] (0xc0000f71e0) (0xc000284000) Create stream\nI0821 23:48:40.238134    2823 log.go:172] (0xc0000f71e0) (0xc000284000) Stream added, broadcasting: 5\nI0821 23:48:40.239080    2823 log.go:172] (0xc0000f71e0) Reply frame received for 5\nI0821 23:48:40.303181    2823 log.go:172] (0xc0000f71e0) Data frame received for 5\nI0821 23:48:40.303210    2823 log.go:172] (0xc000284000) (5) Data frame handling\nI0821 23:48:40.303238    2823 log.go:172] (0xc000284000) (5) Data frame sent\n+ nslookup clusterip-service\nI0821 23:48:40.311647    2823 log.go:172] (0xc0000f71e0) Data frame received for 3\nI0821 23:48:40.311684    2823 log.go:172] (0xc0005c0000) (3) Data frame handling\nI0821 23:48:40.311718    2823 log.go:172] (0xc0005c0000) (3) Data frame sent\nI0821 23:48:40.313078    2823 log.go:172] (0xc0000f71e0) Data frame received for 3\nI0821 23:48:40.313106    2823 log.go:172] (0xc0005c0000) (3) Data frame handling\nI0821 23:48:40.313126    2823 log.go:172] (0xc0005c0000) (3) Data frame sent\nI0821 23:48:40.313550    2823 log.go:172] (0xc0000f71e0) Data frame received for 3\nI0821 23:48:40.313596    2823 log.go:172] (0xc0005c0000) (3) Data frame handling\nI0821 23:48:40.313631    2823 log.go:172] (0xc0000f71e0) Data frame received for 5\nI0821 23:48:40.313652    2823 log.go:172] (0xc000284000) (5) Data frame handling\nI0821 23:48:40.315553    2823 log.go:172] (0xc0000f71e0) Data frame received for 1\nI0821 23:48:40.315577    2823 log.go:172] (0xc000685b80) (1) Data frame handling\nI0821 23:48:40.315604    2823 log.go:172] (0xc000685b80) (1) Data frame sent\nI0821 23:48:40.315624    2823 log.go:172] (0xc0000f71e0) (0xc000685b80) Stream removed, broadcasting: 1\nI0821 23:48:40.315772    2823 log.go:172] (0xc0000f71e0) Go away received\nI0821 23:48:40.316124    2823 log.go:172] (0xc0000f71e0) (0xc000685b80) Stream removed, broadcasting: 1\nI0821 23:48:40.316147    2823 log.go:172] (0xc0000f71e0) (0xc0005c0000) Stream removed, broadcasting: 3\nI0821 23:48:40.316158    2823 log.go:172] (0xc0000f71e0) (0xc000284000) Stream removed, broadcasting: 5\n"
Aug 21 23:48:40.324: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3640.svc.cluster.local\tcanonical name = externalsvc.services-3640.svc.cluster.local.\nName:\texternalsvc.services-3640.svc.cluster.local\nAddress: 10.96.126.241\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3640, will wait for the garbage collector to delete the pods
Aug 21 23:48:40.383: INFO: Deleting ReplicationController externalsvc took: 6.177338ms
Aug 21 23:48:40.684: INFO: Terminating ReplicationController externalsvc pods took: 300.253785ms
Aug 21 23:48:51.809: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:51.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3640" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.019 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":115,"skipped":1803,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:51.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 23:48:51.960: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983" in namespace "downward-api-4562" to be "success or failure"
Aug 21 23:48:51.974: INFO: Pod "downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983": Phase="Pending", Reason="", readiness=false. Elapsed: 14.004435ms
Aug 21 23:48:53.978: INFO: Pod "downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017677076s
Aug 21 23:48:55.982: INFO: Pod "downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021665791s
STEP: Saw pod success
Aug 21 23:48:55.982: INFO: Pod "downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983" satisfied condition "success or failure"
Aug 21 23:48:55.985: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983 container client-container: 
STEP: delete the pod
Aug 21 23:48:56.003: INFO: Waiting for pod downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983 to disappear
Aug 21 23:48:56.022: INFO: Pod downwardapi-volume-8f6b2637-2923-4937-823a-b239db1bb983 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:48:56.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4562" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1815,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:48:56.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3945
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 23:48:56.135: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 21 23:49:22.251: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.199:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 23:49:22.251: INFO: >>> kubeConfig: /root/.kube/config
I0821 23:49:22.274864       6 log.go:172] (0xc001e84210) (0xc002db66e0) Create stream
I0821 23:49:22.274909       6 log.go:172] (0xc001e84210) (0xc002db66e0) Stream added, broadcasting: 1
I0821 23:49:22.276709       6 log.go:172] (0xc001e84210) Reply frame received for 1
I0821 23:49:22.276863       6 log.go:172] (0xc001e84210) (0xc0028b8960) Create stream
I0821 23:49:22.276888       6 log.go:172] (0xc001e84210) (0xc0028b8960) Stream added, broadcasting: 3
I0821 23:49:22.277913       6 log.go:172] (0xc001e84210) Reply frame received for 3
I0821 23:49:22.277943       6 log.go:172] (0xc001e84210) (0xc0029d0000) Create stream
I0821 23:49:22.277955       6 log.go:172] (0xc001e84210) (0xc0029d0000) Stream added, broadcasting: 5
I0821 23:49:22.278818       6 log.go:172] (0xc001e84210) Reply frame received for 5
I0821 23:49:22.352164       6 log.go:172] (0xc001e84210) Data frame received for 5
I0821 23:49:22.352210       6 log.go:172] (0xc0029d0000) (5) Data frame handling
I0821 23:49:22.352238       6 log.go:172] (0xc001e84210) Data frame received for 3
I0821 23:49:22.352267       6 log.go:172] (0xc0028b8960) (3) Data frame handling
I0821 23:49:22.352299       6 log.go:172] (0xc0028b8960) (3) Data frame sent
I0821 23:49:22.352318       6 log.go:172] (0xc001e84210) Data frame received for 3
I0821 23:49:22.352337       6 log.go:172] (0xc0028b8960) (3) Data frame handling
I0821 23:49:22.354123       6 log.go:172] (0xc001e84210) Data frame received for 1
I0821 23:49:22.354148       6 log.go:172] (0xc002db66e0) (1) Data frame handling
I0821 23:49:22.354180       6 log.go:172] (0xc002db66e0) (1) Data frame sent
I0821 23:49:22.354190       6 log.go:172] (0xc001e84210) (0xc002db66e0) Stream removed, broadcasting: 1
I0821 23:49:22.354218       6 log.go:172] (0xc001e84210) Go away received
I0821 23:49:22.354374       6 log.go:172] (0xc001e84210) (0xc002db66e0) Stream removed, broadcasting: 1
I0821 23:49:22.354432       6 log.go:172] (0xc001e84210) (0xc0028b8960) Stream removed, broadcasting: 3
I0821 23:49:22.354449       6 log.go:172] (0xc001e84210) (0xc0029d0000) Stream removed, broadcasting: 5
Aug 21 23:49:22.354: INFO: Found all expected endpoints: [netserver-0]
Aug 21 23:49:22.381: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.177:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 23:49:22.381: INFO: >>> kubeConfig: /root/.kube/config
I0821 23:49:22.420585       6 log.go:172] (0xc003b2a420) (0xc0029d0460) Create stream
I0821 23:49:22.420623       6 log.go:172] (0xc003b2a420) (0xc0029d0460) Stream added, broadcasting: 1
I0821 23:49:22.426177       6 log.go:172] (0xc003b2a420) Reply frame received for 1
I0821 23:49:22.426229       6 log.go:172] (0xc003b2a420) (0xc002580000) Create stream
I0821 23:49:22.426244       6 log.go:172] (0xc003b2a420) (0xc002580000) Stream added, broadcasting: 3
I0821 23:49:22.428670       6 log.go:172] (0xc003b2a420) Reply frame received for 3
I0821 23:49:22.428702       6 log.go:172] (0xc003b2a420) (0xc002580280) Create stream
I0821 23:49:22.428831       6 log.go:172] (0xc003b2a420) (0xc002580280) Stream added, broadcasting: 5
I0821 23:49:22.430844       6 log.go:172] (0xc003b2a420) Reply frame received for 5
I0821 23:49:22.517491       6 log.go:172] (0xc003b2a420) Data frame received for 3
I0821 23:49:22.517546       6 log.go:172] (0xc002580000) (3) Data frame handling
I0821 23:49:22.517567       6 log.go:172] (0xc002580000) (3) Data frame sent
I0821 23:49:22.517581       6 log.go:172] (0xc003b2a420) Data frame received for 3
I0821 23:49:22.517605       6 log.go:172] (0xc002580000) (3) Data frame handling
I0821 23:49:22.517800       6 log.go:172] (0xc003b2a420) Data frame received for 5
I0821 23:49:22.517821       6 log.go:172] (0xc002580280) (5) Data frame handling
I0821 23:49:22.519672       6 log.go:172] (0xc003b2a420) Data frame received for 1
I0821 23:49:22.519688       6 log.go:172] (0xc0029d0460) (1) Data frame handling
I0821 23:49:22.519703       6 log.go:172] (0xc0029d0460) (1) Data frame sent
I0821 23:49:22.519720       6 log.go:172] (0xc003b2a420) (0xc0029d0460) Stream removed, broadcasting: 1
I0821 23:49:22.519801       6 log.go:172] (0xc003b2a420) (0xc0029d0460) Stream removed, broadcasting: 1
I0821 23:49:22.519820       6 log.go:172] (0xc003b2a420) (0xc002580000) Stream removed, broadcasting: 3
I0821 23:49:22.519927       6 log.go:172] (0xc003b2a420) (0xc002580280) Stream removed, broadcasting: 5
I0821 23:49:22.520078       6 log.go:172] (0xc003b2a420) Go away received
Aug 21 23:49:22.520: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:49:22.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3945" for this suite.

• [SLOW TEST:26.498 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1827,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:49:22.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:49:22.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 23:49:24.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3747 create -f -'
Aug 21 23:49:27.706: INFO: stderr: ""
Aug 21 23:49:27.706: INFO: stdout: "e2e-test-crd-publish-openapi-2086-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 21 23:49:27.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3747 delete e2e-test-crd-publish-openapi-2086-crds test-cr'
Aug 21 23:49:28.380: INFO: stderr: ""
Aug 21 23:49:28.380: INFO: stdout: "e2e-test-crd-publish-openapi-2086-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 21 23:49:28.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3747 apply -f -'
Aug 21 23:49:28.968: INFO: stderr: ""
Aug 21 23:49:28.968: INFO: stdout: "e2e-test-crd-publish-openapi-2086-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 21 23:49:28.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3747 delete e2e-test-crd-publish-openapi-2086-crds test-cr'
Aug 21 23:49:29.381: INFO: stderr: ""
Aug 21 23:49:29.381: INFO: stdout: "e2e-test-crd-publish-openapi-2086-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 21 23:49:29.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2086-crds'
Aug 21 23:49:29.725: INFO: stderr: ""
Aug 21 23:49:29.725: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2086-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:49:32.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3747" for this suite.

• [SLOW TEST:10.144 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":118,"skipped":1829,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:49:32.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 21 23:49:32.811: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-label-changed bf19d4d5-c00c-4fea-82d0-54e5110c4604 2287026 0 2020-08-21 23:49:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 23:49:32.811: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-label-changed bf19d4d5-c00c-4fea-82d0-54e5110c4604 2287028 0 2020-08-21 23:49:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 21 23:49:32.811: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-label-changed bf19d4d5-c00c-4fea-82d0-54e5110c4604 2287029 0 2020-08-21 23:49:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 21 23:49:42.837: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-label-changed bf19d4d5-c00c-4fea-82d0-54e5110c4604 2287065 0 2020-08-21 23:49:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 23:49:42.837: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-label-changed bf19d4d5-c00c-4fea-82d0-54e5110c4604 2287066 0 2020-08-21 23:49:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 21 23:49:42.837: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-label-changed bf19d4d5-c00c-4fea-82d0-54e5110c4604 2287067 0 2020-08-21 23:49:32 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:49:42.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8269" for this suite.

• [SLOW TEST:10.202 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":119,"skipped":1853,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:49:42.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-96414528-d96f-4c0a-b985-eb2eb549c071
STEP: Creating a pod to test consume secrets
Aug 21 23:49:42.943: INFO: Waiting up to 5m0s for pod "pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7" in namespace "secrets-2021" to be "success or failure"
Aug 21 23:49:43.010: INFO: Pod "pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7": Phase="Pending", Reason="", readiness=false. Elapsed: 66.710273ms
Aug 21 23:49:45.014: INFO: Pod "pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070735944s
Aug 21 23:49:47.018: INFO: Pod "pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074855063s
STEP: Saw pod success
Aug 21 23:49:47.018: INFO: Pod "pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7" satisfied condition "success or failure"
Aug 21 23:49:47.021: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7 container secret-env-test: 
STEP: delete the pod
Aug 21 23:49:47.211: INFO: Waiting for pod pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7 to disappear
Aug 21 23:49:47.229: INFO: Pod pod-secrets-c9c4ede2-70f9-4da9-bffe-21839012ccd7 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:49:47.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2021" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1912,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:49:47.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Aug 21 23:49:47.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4693'
Aug 21 23:49:47.673: INFO: stderr: ""
Aug 21 23:49:47.673: INFO: stdout: "pod/pause created\n"
Aug 21 23:49:47.673: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 21 23:49:47.673: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4693" to be "running and ready"
Aug 21 23:49:47.678: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.906729ms
Aug 21 23:49:49.682: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009031415s
Aug 21 23:49:51.686: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.012956123s
Aug 21 23:49:51.686: INFO: Pod "pause" satisfied condition "running and ready"
Aug 21 23:49:51.686: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 21 23:49:51.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4693'
Aug 21 23:49:51.787: INFO: stderr: ""
Aug 21 23:49:51.787: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 21 23:49:51.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4693'
Aug 21 23:49:51.876: INFO: stderr: ""
Aug 21 23:49:51.876: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 21 23:49:51.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4693'
Aug 21 23:49:51.980: INFO: stderr: ""
Aug 21 23:49:51.980: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 21 23:49:51.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4693'
Aug 21 23:49:52.081: INFO: stderr: ""
Aug 21 23:49:52.081: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Aug 21 23:49:52.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4693'
Aug 21 23:49:52.244: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 23:49:52.244: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 21 23:49:52.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4693'
Aug 21 23:49:52.333: INFO: stderr: "No resources found in kubectl-4693 namespace.\n"
Aug 21 23:49:52.333: INFO: stdout: ""
Aug 21 23:49:52.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4693 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 23:49:52.668: INFO: stderr: ""
Aug 21 23:49:52.669: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:49:52.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4693" for this suite.

• [SLOW TEST:5.414 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":121,"skipped":1922,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:49:52.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 21 23:49:53.024: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3546 /api/v1/namespaces/watch-3546/configmaps/e2e-watch-test-resource-version c129a035-6dc0-49f3-b475-f1d7753a1b24 2287149 0 2020-08-21 23:49:52 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 23:49:53.024: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3546 /api/v1/namespaces/watch-3546/configmaps/e2e-watch-test-resource-version c129a035-6dc0-49f3-b475-f1d7753a1b24 2287151 0 2020-08-21 23:49:52 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:49:53.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3546" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":122,"skipped":1931,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:49:53.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 21 23:49:53.444: INFO: Waiting up to 5m0s for pod "pod-74c9057f-2264-4de6-98c0-f455dd822c73" in namespace "emptydir-2406" to be "success or failure"
Aug 21 23:49:53.470: INFO: Pod "pod-74c9057f-2264-4de6-98c0-f455dd822c73": Phase="Pending", Reason="", readiness=false. Elapsed: 25.961752ms
Aug 21 23:49:55.474: INFO: Pod "pod-74c9057f-2264-4de6-98c0-f455dd822c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029442201s
Aug 21 23:49:57.477: INFO: Pod "pod-74c9057f-2264-4de6-98c0-f455dd822c73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032801593s
STEP: Saw pod success
Aug 21 23:49:57.477: INFO: Pod "pod-74c9057f-2264-4de6-98c0-f455dd822c73" satisfied condition "success or failure"
Aug 21 23:49:57.479: INFO: Trying to get logs from node jerma-worker2 pod pod-74c9057f-2264-4de6-98c0-f455dd822c73 container test-container: 
STEP: delete the pod
Aug 21 23:49:57.496: INFO: Waiting for pod pod-74c9057f-2264-4de6-98c0-f455dd822c73 to disappear
Aug 21 23:49:57.501: INFO: Pod pod-74c9057f-2264-4de6-98c0-f455dd822c73 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:49:57.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2406" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1944,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:49:57.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-57pj
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 23:49:57.610: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-57pj" in namespace "subpath-1918" to be "success or failure"
Aug 21 23:49:57.668: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Pending", Reason="", readiness=false. Elapsed: 58.259248ms
Aug 21 23:49:59.672: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062688827s
Aug 21 23:50:01.676: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 4.066113475s
Aug 21 23:50:03.680: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 6.070691903s
Aug 21 23:50:05.684: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 8.0741285s
Aug 21 23:50:07.689: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 10.078846618s
Aug 21 23:50:09.693: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 12.083074673s
Aug 21 23:50:11.697: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 14.087613193s
Aug 21 23:50:13.702: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 16.092343734s
Aug 21 23:50:15.706: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 18.096395384s
Aug 21 23:50:17.710: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 20.100316689s
Aug 21 23:50:19.715: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Running", Reason="", readiness=true. Elapsed: 22.104921927s
Aug 21 23:50:21.719: INFO: Pod "pod-subpath-test-configmap-57pj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.108953909s
STEP: Saw pod success
Aug 21 23:50:21.719: INFO: Pod "pod-subpath-test-configmap-57pj" satisfied condition "success or failure"
Aug 21 23:50:21.722: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-57pj container test-container-subpath-configmap-57pj: 
STEP: delete the pod
Aug 21 23:50:21.754: INFO: Waiting for pod pod-subpath-test-configmap-57pj to disappear
Aug 21 23:50:21.762: INFO: Pod pod-subpath-test-configmap-57pj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-57pj
Aug 21 23:50:21.763: INFO: Deleting pod "pod-subpath-test-configmap-57pj" in namespace "subpath-1918"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:50:21.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1918" for this suite.

• [SLOW TEST:24.236 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":124,"skipped":1975,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:50:21.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 23:50:21.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:21.894: INFO: Number of nodes with available pods: 0
Aug 21 23:50:21.894: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:22.898: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:22.901: INFO: Number of nodes with available pods: 0
Aug 21 23:50:22.901: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:23.899: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:23.902: INFO: Number of nodes with available pods: 0
Aug 21 23:50:23.902: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:24.908: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:24.912: INFO: Number of nodes with available pods: 0
Aug 21 23:50:24.912: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:25.899: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:25.920: INFO: Number of nodes with available pods: 2
Aug 21 23:50:25.920: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 21 23:50:25.939: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:25.941: INFO: Number of nodes with available pods: 1
Aug 21 23:50:25.941: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:26.957: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:26.968: INFO: Number of nodes with available pods: 1
Aug 21 23:50:26.968: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:27.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:27.949: INFO: Number of nodes with available pods: 1
Aug 21 23:50:27.949: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:28.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:28.950: INFO: Number of nodes with available pods: 1
Aug 21 23:50:28.950: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:29.974: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:29.976: INFO: Number of nodes with available pods: 1
Aug 21 23:50:29.976: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:30.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:30.949: INFO: Number of nodes with available pods: 1
Aug 21 23:50:30.949: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:31.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:31.949: INFO: Number of nodes with available pods: 1
Aug 21 23:50:31.949: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:32.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:32.950: INFO: Number of nodes with available pods: 1
Aug 21 23:50:32.950: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:33.947: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:33.951: INFO: Number of nodes with available pods: 1
Aug 21 23:50:33.951: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:34.947: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:34.950: INFO: Number of nodes with available pods: 1
Aug 21 23:50:34.950: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:35.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:35.950: INFO: Number of nodes with available pods: 1
Aug 21 23:50:35.950: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:36.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:36.950: INFO: Number of nodes with available pods: 1
Aug 21 23:50:36.950: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:37.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:37.949: INFO: Number of nodes with available pods: 1
Aug 21 23:50:37.949: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:38.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:38.947: INFO: Number of nodes with available pods: 1
Aug 21 23:50:38.947: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:39.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:39.949: INFO: Number of nodes with available pods: 1
Aug 21 23:50:39.949: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:40.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:40.949: INFO: Number of nodes with available pods: 1
Aug 21 23:50:40.949: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:41.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:41.949: INFO: Number of nodes with available pods: 1
Aug 21 23:50:41.949: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:42.947: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:42.950: INFO: Number of nodes with available pods: 1
Aug 21 23:50:42.950: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:43.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:43.951: INFO: Number of nodes with available pods: 1
Aug 21 23:50:43.951: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:50:44.969: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:50:44.972: INFO: Number of nodes with available pods: 2
Aug 21 23:50:44.973: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1687, will wait for the garbage collector to delete the pods
Aug 21 23:50:45.034: INFO: Deleting DaemonSet.extensions daemon-set took: 6.03014ms
Aug 21 23:50:45.435: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.3328ms
Aug 21 23:50:51.638: INFO: Number of nodes with available pods: 0
Aug 21 23:50:51.638: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 23:50:51.641: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1687/daemonsets","resourceVersion":"2287449"},"items":null}

Aug 21 23:50:51.644: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1687/pods","resourceVersion":"2287449"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:50:51.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1687" for this suite.

• [SLOW TEST:29.890 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":125,"skipped":2008,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:50:51.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-5594ba43-1aa5-40c5-a0e8-a961b5ae575d
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:50:51.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3687" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":126,"skipped":2020,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:50:51.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 21 23:50:51.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-415'
Aug 21 23:50:52.114: INFO: stderr: ""
Aug 21 23:50:52.114: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 23:50:52.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-415'
Aug 21 23:50:52.253: INFO: stderr: ""
Aug 21 23:50:52.253: INFO: stdout: "update-demo-nautilus-5dw77 update-demo-nautilus-lzkpv "
Aug 21 23:50:52.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5dw77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-415'
Aug 21 23:50:52.371: INFO: stderr: ""
Aug 21 23:50:52.371: INFO: stdout: ""
Aug 21 23:50:52.371: INFO: update-demo-nautilus-5dw77 is created but not running
Aug 21 23:50:57.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-415'
Aug 21 23:50:57.484: INFO: stderr: ""
Aug 21 23:50:57.484: INFO: stdout: "update-demo-nautilus-5dw77 update-demo-nautilus-lzkpv "
Aug 21 23:50:57.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5dw77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-415'
Aug 21 23:50:57.577: INFO: stderr: ""
Aug 21 23:50:57.577: INFO: stdout: "true"
Aug 21 23:50:57.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5dw77 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-415'
Aug 21 23:50:57.675: INFO: stderr: ""
Aug 21 23:50:57.675: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 23:50:57.675: INFO: validating pod update-demo-nautilus-5dw77
Aug 21 23:50:57.679: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 23:50:57.679: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 21 23:50:57.679: INFO: update-demo-nautilus-5dw77 is verified up and running
Aug 21 23:50:57.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzkpv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-415'
Aug 21 23:50:57.764: INFO: stderr: ""
Aug 21 23:50:57.764: INFO: stdout: "true"
Aug 21 23:50:57.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzkpv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-415'
Aug 21 23:50:57.862: INFO: stderr: ""
Aug 21 23:50:57.862: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 23:50:57.862: INFO: validating pod update-demo-nautilus-lzkpv
Aug 21 23:50:57.866: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 23:50:57.866: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 21 23:50:57.866: INFO: update-demo-nautilus-lzkpv is verified up and running
STEP: using delete to clean up resources
Aug 21 23:50:57.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-415'
Aug 21 23:50:57.971: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 23:50:57.971: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 21 23:50:57.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-415'
Aug 21 23:50:58.080: INFO: stderr: "No resources found in kubectl-415 namespace.\n"
Aug 21 23:50:58.080: INFO: stdout: ""
Aug 21 23:50:58.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-415 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 23:50:58.188: INFO: stderr: ""
Aug 21 23:50:58.188: INFO: stdout: "update-demo-nautilus-5dw77\nupdate-demo-nautilus-lzkpv\n"
Aug 21 23:50:58.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-415'
Aug 21 23:50:58.792: INFO: stderr: "No resources found in kubectl-415 namespace.\n"
Aug 21 23:50:58.792: INFO: stdout: ""
Aug 21 23:50:58.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-415 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 23:50:58.899: INFO: stderr: ""
Aug 21 23:50:58.899: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:50:58.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-415" for this suite.

• [SLOW TEST:7.179 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":127,"skipped":2024,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
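
Note on the polling loop above: kubectl's --template flag accepts Go text/template syntax plus kubectl-only helpers such as "exists", which the standard library does not provide. The pod-name listing step can be reproduced with text/template alone; a minimal sketch, with inlined JSON standing in for the API response and field names following the Go structs rather than kubectl's lowercase JSON paths:

    package main

    import (
        "encoding/json"
        "os"
        "text/template"
    )

    // podList models just the slice of `kubectl get pods -o json` output
    // that the test's template walks over.
    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
        } `json:"items"`
    }

    func main() {
        // In the run above this JSON comes from the API server; it is inlined here.
        raw := []byte(`{"items":[{"metadata":{"name":"update-demo-nautilus-5dw77"}},
                        {"metadata":{"name":"update-demo-nautilus-lzkpv"}}]}`)
        var pods podList
        if err := json.Unmarshal(raw, &pods); err != nil {
            panic(err)
        }
        // Mirrors --template={{range.items}}{{.metadata.name}} {{end}}.
        tmpl := template.Must(template.New("names").Parse("{{range .Items}}{{.Metadata.Name}} {{end}}"))
        if err := tmpl.Execute(os.Stdout, pods); err != nil {
            panic(err)
        }
    }
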
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:50:58.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 23:50:59.333: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576" in namespace "projected-3282" to be "success or failure"
Aug 21 23:50:59.335: INFO: Pod "downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115013ms
Aug 21 23:51:01.473: INFO: Pod "downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139817402s
Aug 21 23:51:03.477: INFO: Pod "downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576": Phase="Running", Reason="", readiness=true. Elapsed: 4.144045367s
Aug 21 23:51:05.481: INFO: Pod "downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.148120866s
STEP: Saw pod success
Aug 21 23:51:05.481: INFO: Pod "downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576" satisfied condition "success or failure"
Aug 21 23:51:05.484: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576 container client-container: 
STEP: delete the pod
Aug 21 23:51:05.536: INFO: Waiting for pod downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576 to disappear
Aug 21 23:51:05.568: INFO: Pod downwardapi-volume-fd6957af-8388-49f5-a8e5-49783cfd1576 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:51:05.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3282" for this suite.

• [SLOW TEST:6.681 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2094,"failed":0}
SSSSSSS
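
The "set mode on item file" assertion targets the per-item mode field of a projected downward API volume, which overrides the volume's defaultMode for that one file. A minimal sketch of such a volume, assuming the k8s.io/api module is on the import path; the volume name and file path are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // per-item mode; this is what the mounted file's permissions are checked against
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "podname",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    APIVersion: "v1",
                                    FieldPath:  "metadata.name",
                                },
                                Mode: &mode, // overrides the volume-wide defaultMode for this file only
                            }},
                        },
                    }},
                },
            },
        }
        b, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(b))
    }
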
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:51:05.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:51:09.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4598" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:51:09.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 21 23:51:09.824: INFO: Waiting up to 5m0s for pod "pod-f5da58ee-9025-4f73-b3d8-d062ba404db6" in namespace "emptydir-1281" to be "success or failure"
Aug 21 23:51:09.855: INFO: Pod "pod-f5da58ee-9025-4f73-b3d8-d062ba404db6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.78801ms
Aug 21 23:51:11.956: INFO: Pod "pod-f5da58ee-9025-4f73-b3d8-d062ba404db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13269837s
Aug 21 23:51:13.960: INFO: Pod "pod-f5da58ee-9025-4f73-b3d8-d062ba404db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136661537s
STEP: Saw pod success
Aug 21 23:51:13.960: INFO: Pod "pod-f5da58ee-9025-4f73-b3d8-d062ba404db6" satisfied condition "success or failure"
Aug 21 23:51:13.964: INFO: Trying to get logs from node jerma-worker2 pod pod-f5da58ee-9025-4f73-b3d8-d062ba404db6 container test-container: 
STEP: delete the pod
Aug 21 23:51:14.003: INFO: Waiting for pod pod-f5da58ee-9025-4f73-b3d8-d062ba404db6 to disappear
Aug 21 23:51:14.016: INFO: Pod pod-f5da58ee-9025-4f73-b3d8-d062ba404db6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:51:14.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1281" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2124,"failed":0}
SSSSSSSSSSSSSSSSSSSS
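
The "(non-root,0644,tmpfs)" variant combines a pod-level runAsUser with an emptyDir backed by medium Memory, i.e. tmpfs. A minimal sketch of the relevant spec fields, assuming k8s.io/api; the UID, image, and volume names are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        nonRoot := int64(1001) // any non-root UID; the "(non-root,...)" part of the test name
        spec := corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &nonRoot,
            },
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs,
                    // which is what the "(...,tmpfs)" variant exercises.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // illustrative
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        }
        fmt.Printf("%+v\n", spec.Volumes[0])
    }
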
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:51:14.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 23:51:14.607: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 23:51:16.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650674, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650674, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650674, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650674, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 23:51:19.681: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one and verify the webhook rejects it
STEP: update (PATCH) the admitted configmap to a non-compliant one and verify the webhook rejects it
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:51:29.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3338" for this suite.
STEP: Destroying namespace "webhook-3338-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.984 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":131,"skipped":2144,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:51:30.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-14307ec6-6c2c-41f1-ac23-dac397b0c198
STEP: Creating a pod to test consume configMaps
Aug 21 23:51:30.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f" in namespace "configmap-7468" to be "success or failure"
Aug 21 23:51:30.136: INFO: Pod "pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.382354ms
Aug 21 23:51:32.139: INFO: Pod "pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022947537s
Aug 21 23:51:34.143: INFO: Pod "pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026765182s
STEP: Saw pod success
Aug 21 23:51:34.143: INFO: Pod "pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f" satisfied condition "success or failure"
Aug 21 23:51:34.146: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f container configmap-volume-test: 
STEP: delete the pod
Aug 21 23:51:34.377: INFO: Waiting for pod pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f to disappear
Aug 21 23:51:34.431: INFO: Pod pod-configmaps-090a1d8b-426d-4ee7-b05d-6bfdb605a65f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:51:34.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7468" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2144,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:51:34.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 21 23:51:34.536: INFO: namespace kubectl-6865
Aug 21 23:51:34.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6865'
Aug 21 23:51:34.782: INFO: stderr: ""
Aug 21 23:51:34.782: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 21 23:51:35.785: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 23:51:35.785: INFO: Found 0 / 1
Aug 21 23:51:36.808: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 23:51:36.808: INFO: Found 0 / 1
Aug 21 23:51:37.786: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 23:51:37.786: INFO: Found 0 / 1
Aug 21 23:51:38.786: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 23:51:38.786: INFO: Found 1 / 1
Aug 21 23:51:38.786: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Aug 21 23:51:38.790: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 23:51:38.790: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 21 23:51:38.790: INFO: wait on agnhost-master startup in kubectl-6865 
Aug 21 23:51:38.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-ldczr agnhost-master --namespace=kubectl-6865'
Aug 21 23:51:38.912: INFO: stderr: ""
Aug 21 23:51:38.912: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 21 23:51:38.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6865'
Aug 21 23:51:39.045: INFO: stderr: ""
Aug 21 23:51:39.045: INFO: stdout: "service/rm2 exposed\n"
Aug 21 23:51:39.054: INFO: Service rm2 in namespace kubectl-6865 found.
STEP: exposing service
Aug 21 23:51:41.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6865'
Aug 21 23:51:41.221: INFO: stderr: ""
Aug 21 23:51:41.221: INFO: stdout: "service/rm3 exposed\n"
Aug 21 23:51:41.225: INFO: Service rm3 in namespace kubectl-6865 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:51:43.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6865" for this suite.

• [SLOW TEST:8.804 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":133,"skipped":2148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
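
"kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379" is roughly equivalent to creating a Service whose selector copies the RC's pod labels, so the new service resolves to the same pods. A minimal sketch, assuming k8s.io/api and k8s.io/apimachinery:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-6865"},
            Spec: corev1.ServiceSpec{
                // Label taken from the pod-wait loop above (map[app:agnhost]).
                Selector: map[string]string{"app": "agnhost"},
                Ports: []corev1.ServicePort{{
                    Port:       1234,                // the service port from --port
                    TargetPort: intstr.FromInt(6379), // the container port from --target-port
                }},
            },
        }
        fmt.Printf("%s -> %v\n", svc.Name, svc.Spec.Ports[0])
    }
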
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:51:43.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 21 23:51:48.273: INFO: Successfully updated pod "labelsupdate7e4b43dc-6639-481b-96d3-6a99de28ed9b"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:51:51.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8416" for this suite.

• [SLOW TEST:8.007 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2195,"failed":0}
SSSSSSSSSS
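
The labels test relies on the kubelet keeping a downward API volume file with fieldRef metadata.labels up to date: after "Successfully updated pod", the mounted file eventually reflects the new labels with no container restart. A minimal sketch of such a volume, assuming k8s.io/api; the volume name is illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "labels",
                        // The kubelet rewrites this file when the pod's labels change.
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
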
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:51:51.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1916
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-1916
Aug 21 23:51:52.283: INFO: Found 0 stateful pods, waiting for 1
Aug 21 23:52:02.418: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 21 23:52:02.795: INFO: Deleting all statefulset in ns statefulset-1916
Aug 21 23:52:02.798: INFO: Scaling statefulset ss to 0
Aug 21 23:52:12.965: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 23:52:12.969: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:52:13.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1916" for this suite.

• [SLOW TEST:21.769 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":135,"skipped":2205,"failed":0}
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:52:13.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 21 23:52:17.693: INFO: Successfully updated pod "annotationupdate935138c0-1104-425d-b3a7-1705374de933"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:52:21.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7596" for this suite.

• [SLOW TEST:8.722 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2205,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:52:21.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-4lrq
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 23:52:21.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4lrq" in namespace "subpath-832" to be "success or failure"
Aug 21 23:52:21.841: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Pending", Reason="", readiness=false. Elapsed: 7.970086ms
Aug 21 23:52:23.845: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012335819s
Aug 21 23:52:25.849: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 4.016348237s
Aug 21 23:52:27.853: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 6.020600884s
Aug 21 23:52:29.857: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 8.024479622s
Aug 21 23:52:31.861: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 10.028240622s
Aug 21 23:52:33.865: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 12.032521796s
Aug 21 23:52:35.869: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 14.036534612s
Aug 21 23:52:37.873: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 16.040638678s
Aug 21 23:52:39.877: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 18.044392362s
Aug 21 23:52:41.881: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 20.048448029s
Aug 21 23:52:43.885: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Running", Reason="", readiness=true. Elapsed: 22.052489564s
Aug 21 23:52:45.889: INFO: Pod "pod-subpath-test-configmap-4lrq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056572877s
STEP: Saw pod success
Aug 21 23:52:45.889: INFO: Pod "pod-subpath-test-configmap-4lrq" satisfied condition "success or failure"
Aug 21 23:52:45.892: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-4lrq container test-container-subpath-configmap-4lrq: 
STEP: delete the pod
Aug 21 23:52:45.914: INFO: Waiting for pod pod-subpath-test-configmap-4lrq to disappear
Aug 21 23:52:45.918: INFO: Pod pod-subpath-test-configmap-4lrq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4lrq
Aug 21 23:52:45.918: INFO: Deleting pod "pod-subpath-test-configmap-4lrq" in namespace "subpath-832"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:52:45.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-832" for this suite.

• [SLOW TEST:24.184 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":137,"skipped":2212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
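
The subPath variant mounts a single key of the configmap over an existing file: subPath selects one entry of the volume, so only that path is shadowed rather than the whole mountPath directory. A minimal sketch of the container side, assuming k8s.io/api; the image and key name are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "test-container-subpath",
            Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // illustrative
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "test-volume",
                MountPath: "/test/file.txt", // a file that already exists in the image
                SubPath:   "configmap-key",  // illustrative key of the configmap volume
            }},
        }
        fmt.Printf("%+v\n", c.VolumeMounts[0])
    }
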
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:52:45.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0821 23:52:47.070404       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 23:52:47.070: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:52:47.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3042" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":138,"skipped":2242,"failed":0}
SS
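
The transient "expected 0 rs, got 1 rs" / "expected 0 pods, got 2 pods" lines are the point of this test: with background propagation the Deployment disappears first and the garbage collector removes the owned ReplicaSet and Pods shortly afterwards. A minimal client-go sketch of such a delete, assuming client-go v0.18+; the deployment name is illustrative since the log does not show it:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Background propagation: the Deployment is deleted immediately and the
        // garbage collector deletes the owned ReplicaSet and Pods afterwards,
        // which is why counts briefly lag behind zero in the run above.
        policy := metav1.DeletePropagationBackground
        err = client.AppsV1().Deployments("gc-3042").Delete(context.Background(),
            "test-deployment", // illustrative name
            metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }
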
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:52:47.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9973
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9973
I0821 23:52:47.248624       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9973, replica count: 2
I0821 23:52:50.299163       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 23:52:53.299385       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 23:52:53.299: INFO: Creating new exec pod
Aug 21 23:52:58.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9973 execpodwnzrc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 23:52:58.577: INFO: stderr: "I0821 23:52:58.472519    3468 log.go:172] (0xc000948f20) (0xc0009a0640) Create stream\nI0821 23:52:58.472592    3468 log.go:172] (0xc000948f20) (0xc0009a0640) Stream added, broadcasting: 1\nI0821 23:52:58.477209    3468 log.go:172] (0xc000948f20) Reply frame received for 1\nI0821 23:52:58.477240    3468 log.go:172] (0xc000948f20) (0xc0009a0000) Create stream\nI0821 23:52:58.477251    3468 log.go:172] (0xc000948f20) (0xc0009a0000) Stream added, broadcasting: 3\nI0821 23:52:58.478283    3468 log.go:172] (0xc000948f20) Reply frame received for 3\nI0821 23:52:58.478324    3468 log.go:172] (0xc000948f20) (0xc0006e1b80) Create stream\nI0821 23:52:58.478337    3468 log.go:172] (0xc000948f20) (0xc0006e1b80) Stream added, broadcasting: 5\nI0821 23:52:58.479274    3468 log.go:172] (0xc000948f20) Reply frame received for 5\nI0821 23:52:58.566499    3468 log.go:172] (0xc000948f20) Data frame received for 5\nI0821 23:52:58.566527    3468 log.go:172] (0xc0006e1b80) (5) Data frame handling\nI0821 23:52:58.566547    3468 log.go:172] (0xc0006e1b80) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0821 23:52:58.566803    3468 log.go:172] (0xc000948f20) Data frame received for 5\nI0821 23:52:58.566847    3468 log.go:172] (0xc0006e1b80) (5) Data frame handling\nI0821 23:52:58.566882    3468 log.go:172] (0xc0006e1b80) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 23:52:58.567184    3468 log.go:172] (0xc000948f20) Data frame received for 5\nI0821 23:52:58.567207    3468 log.go:172] (0xc0006e1b80) (5) Data frame handling\nI0821 23:52:58.567233    3468 log.go:172] (0xc000948f20) Data frame received for 3\nI0821 23:52:58.567246    3468 log.go:172] (0xc0009a0000) (3) Data frame handling\nI0821 23:52:58.569378    3468 log.go:172] (0xc000948f20) Data frame received for 1\nI0821 23:52:58.569414    3468 log.go:172] (0xc0009a0640) (1) Data frame handling\nI0821 23:52:58.569434    3468 log.go:172] (0xc0009a0640) (1) Data frame sent\nI0821 23:52:58.569468    3468 log.go:172] (0xc000948f20) (0xc0009a0640) Stream removed, broadcasting: 1\nI0821 23:52:58.569500    3468 log.go:172] (0xc000948f20) Go away received\nI0821 23:52:58.569938    3468 log.go:172] (0xc000948f20) (0xc0009a0640) Stream removed, broadcasting: 1\nI0821 23:52:58.569955    3468 log.go:172] (0xc000948f20) (0xc0009a0000) Stream removed, broadcasting: 3\nI0821 23:52:58.569962    3468 log.go:172] (0xc000948f20) (0xc0006e1b80) Stream removed, broadcasting: 5\n"
Aug 21 23:52:58.578: INFO: stdout: ""
Aug 21 23:52:58.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9973 execpodwnzrc -- /bin/sh -x -c nc -zv -t -w 2 10.106.187.252 80'
Aug 21 23:52:58.789: INFO: stderr: "I0821 23:52:58.706878    3489 log.go:172] (0xc000a95290) (0xc00096a500) Create stream\nI0821 23:52:58.706938    3489 log.go:172] (0xc000a95290) (0xc00096a500) Stream added, broadcasting: 1\nI0821 23:52:58.711998    3489 log.go:172] (0xc000a95290) Reply frame received for 1\nI0821 23:52:58.712058    3489 log.go:172] (0xc000a95290) (0xc00069c640) Create stream\nI0821 23:52:58.712071    3489 log.go:172] (0xc000a95290) (0xc00069c640) Stream added, broadcasting: 3\nI0821 23:52:58.713154    3489 log.go:172] (0xc000a95290) Reply frame received for 3\nI0821 23:52:58.713197    3489 log.go:172] (0xc000a95290) (0xc0004d7400) Create stream\nI0821 23:52:58.713210    3489 log.go:172] (0xc000a95290) (0xc0004d7400) Stream added, broadcasting: 5\nI0821 23:52:58.714095    3489 log.go:172] (0xc000a95290) Reply frame received for 5\nI0821 23:52:58.777263    3489 log.go:172] (0xc000a95290) Data frame received for 5\nI0821 23:52:58.777309    3489 log.go:172] (0xc0004d7400) (5) Data frame handling\nI0821 23:52:58.777350    3489 log.go:172] (0xc0004d7400) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.187.252 80\nConnection to 10.106.187.252 80 port [tcp/http] succeeded!\nI0821 23:52:58.777591    3489 log.go:172] (0xc000a95290) Data frame received for 3\nI0821 23:52:58.777633    3489 log.go:172] (0xc00069c640) (3) Data frame handling\nI0821 23:52:58.777666    3489 log.go:172] (0xc000a95290) Data frame received for 5\nI0821 23:52:58.777697    3489 log.go:172] (0xc0004d7400) (5) Data frame handling\nI0821 23:52:58.779327    3489 log.go:172] (0xc000a95290) Data frame received for 1\nI0821 23:52:58.779355    3489 log.go:172] (0xc00096a500) (1) Data frame handling\nI0821 23:52:58.779389    3489 log.go:172] (0xc00096a500) (1) Data frame sent\nI0821 23:52:58.779420    3489 log.go:172] (0xc000a95290) (0xc00096a500) Stream removed, broadcasting: 1\nI0821 23:52:58.779515    3489 log.go:172] (0xc000a95290) Go away received\nI0821 23:52:58.779946    3489 log.go:172] (0xc000a95290) (0xc00096a500) Stream removed, broadcasting: 1\nI0821 23:52:58.779968    3489 log.go:172] (0xc000a95290) (0xc00069c640) Stream removed, broadcasting: 3\nI0821 23:52:58.779980    3489 log.go:172] (0xc000a95290) (0xc0004d7400) Stream removed, broadcasting: 5\n"
Aug 21 23:52:58.789: INFO: stdout: ""
Aug 21 23:52:58.789: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:52:58.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9973" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.766 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":139,"skipped":2244,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
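
Changing type=ExternalName to type=ClusterIP is an update to the existing Service: externalName is cleared and real ports plus a selector (here backed by the replication controller created above) are set, after which kube-proxy programs a cluster IP that the nc probes then verify. A minimal client-go sketch, assuming client-go v0.18+; the selector labels are illustrative:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        svc, err := client.CoreV1().Services("services-9973").Get(ctx, "externalname-service", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Flip the type: clear externalName and give the service real
        // ports and a selector so endpoints can be populated.
        svc.Spec.Type = corev1.ServiceTypeClusterIP
        svc.Spec.ExternalName = ""
        svc.Spec.Selector = map[string]string{"name": "externalname-service"} // illustrative; must match the RC's pod labels
        svc.Spec.Ports = []corev1.ServicePort{{Port: 80}}
        if _, err := client.CoreV1().Services("services-9973").Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
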
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:52:58.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1782.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1782.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1782.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1782.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1782.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1782.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 23:53:11.031: INFO: DNS probes using dns-1782/dns-test-d8a59978-86bd-42ac-b531-83119a27b67d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:53:11.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1782" for this suite.

• [SLOW TEST:12.301 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":140,"skipped":2270,"failed":0}
SSSSSSSSSSSSSSSSSS
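
The awk pipeline in the probe scripts builds the pod's A record name: a pod's DNS name is its IP with the dots replaced by dashes, under <namespace>.pod.<cluster-domain>. A minimal sketch of the same transformation in Go (the IP is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // podARecord reproduces what the probe does with
    // hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'
    func podARecord(podIP, namespace string) string {
        return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
    }

    func main() {
        // Prints: 10-244-1-23.dns-1782.pod.cluster.local
        fmt.Println(podARecord("10.244.1.23", "dns-1782"))
    }
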
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:53:11.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:53:39.667: INFO: Container started at 2020-08-21 23:53:15 +0000 UTC, pod became ready at 2020-08-21 23:53:39 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:53:39.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9733" for this suite.

• [SLOW TEST:28.530 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2288,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
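
The assertion here is the gap between "Container started at ... 23:53:15" and "pod became ready at ... 23:53:39": a readiness probe with an initial delay must keep the pod un-Ready until the delay elapses, and readiness failures never restart the container (only liveness probes do). A minimal sketch of such a probe, assuming k8s.io/api; the exec command and timings are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        probe := corev1.Probe{
            InitialDelaySeconds: 30, // the pod must not be Ready before this elapses
            PeriodSeconds:       5,
        }
        // Assigning via field promotion through the embedded handler struct
        // keeps this line valid across API versions that renamed the handler type.
        probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}} // illustrative check
        fmt.Printf("%+v\n", probe)
    }
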
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:53:39.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 21 23:53:39.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 21 23:53:39.819: INFO: stderr: ""
Aug 21 23:53:39.819: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:53:39.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4523" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":142,"skipped":2311,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:53:39.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 23:53:45.283: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:53:45.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8979" for this suite.

• [SLOW TEST:5.652 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2327,"failed":0}
SSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:53:45.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 21 23:53:52.748: INFO: Successfully updated pod "adopt-release-5gx6f"
STEP: Checking that the Job readopts the Pod
Aug 21 23:53:52.748: INFO: Waiting up to 15m0s for pod "adopt-release-5gx6f" in namespace "job-6298" to be "adopted"
Aug 21 23:53:52.773: INFO: Pod "adopt-release-5gx6f": Phase="Running", Reason="", readiness=true. Elapsed: 24.73891ms
Aug 21 23:53:54.777: INFO: Pod "adopt-release-5gx6f": Phase="Running", Reason="", readiness=true. Elapsed: 2.028581653s
Aug 21 23:53:54.777: INFO: Pod "adopt-release-5gx6f" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 21 23:53:55.285: INFO: Successfully updated pod "adopt-release-5gx6f"
STEP: Checking that the Job releases the Pod
Aug 21 23:53:55.285: INFO: Waiting up to 15m0s for pod "adopt-release-5gx6f" in namespace "job-6298" to be "released"
Aug 21 23:53:55.308: INFO: Pod "adopt-release-5gx6f": Phase="Running", Reason="", readiness=true. Elapsed: 23.128352ms
Aug 21 23:53:57.341: INFO: Pod "adopt-release-5gx6f": Phase="Running", Reason="", readiness=true. Elapsed: 2.056092374s
Aug 21 23:53:57.341: INFO: Pod "adopt-release-5gx6f" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:53:57.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6298" for this suite.

• [SLOW TEST:11.839 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":144,"skipped":2333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:53:57.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:53:57.543: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 21 23:54:02.546: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 23:54:02.546: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 21 23:54:02.686: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-5680 /apis/apps/v1/namespaces/deployment-5680/deployments/test-cleanup-deployment 07f8a88c-ec52-4878-b72d-af7c93caa2b0 2288736 1 2020-08-21 23:54:02 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036170a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Aug 21 23:54:02.706: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-5680 /apis/apps/v1/namespaces/deployment-5680/replicasets/test-cleanup-deployment-55ffc6b7b6 b07a8c5b-85de-485b-9387-eb576f4e66cc 2288740 1 2020-08-21 23:54:02 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 07f8a88c-ec52-4878-b72d-af7c93caa2b0 0xc0036174c7 0xc0036174c8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003617538  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 23:54:02.707: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 21 23:54:02.707: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-5680 /apis/apps/v1/namespaces/deployment-5680/replicasets/test-cleanup-controller 15ed6bdf-5f31-40a5-a8cd-0e6d691f62b9 2288739 1 2020-08-21 23:53:57 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 07f8a88c-ec52-4878-b72d-af7c93caa2b0 0xc0036173cf 0xc0036173e0}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003617448  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 23:54:02.736: INFO: Pod "test-cleanup-controller-wjwxk" is available:
&Pod{ObjectMeta:{test-cleanup-controller-wjwxk test-cleanup-controller- deployment-5680 /api/v1/namespaces/deployment-5680/pods/test-cleanup-controller-wjwxk 8a791d8a-f4e9-4de6-b0f3-e12c89124c0f 2288722 0 2020-08-21 23:53:57 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 15ed6bdf-5f31-40a5-a8cd-0e6d691f62b9 0xc0037cfe57 0xc0037cfe58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qjf2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qjf2f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qjf2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:53:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:54:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:54:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:53:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.216,StartTime:2020-08-21 23:53:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 23:54:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1560bc88f8717c5343578e2b1040248f23f67562b4480b8e26a306bb7f51b08c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 23:54:02.736: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-l6bjj" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-l6bjj test-cleanup-deployment-55ffc6b7b6- deployment-5680 /api/v1/namespaces/deployment-5680/pods/test-cleanup-deployment-55ffc6b7b6-l6bjj f4129640-ae0d-4802-a29a-8c4fe3eee90a 2288745 0 2020-08-21 23:54:02 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 b07a8c5b-85de-485b-9387-eb576f4e66cc 0xc0037cffe7 0xc0037cffe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qjf2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qjf2f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qjf2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 23:54:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:54:02.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5680" for this suite.

• [SLOW TEST:5.461 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":145,"skipped":2380,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:54:02.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:54:40.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-239" for this suite.

• [SLOW TEST:37.501 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2385,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:54:40.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:54:44.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-953" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:54:44.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:54:44.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:54:48.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2737" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2429,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:54:49.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 23:54:50.035: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 23:54:52.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 23:54:54.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733650890, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 23:54:57.396: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:54:57.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3966" for this suite.
STEP: Destroying namespace "webhook-3966-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.807 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":149,"skipped":2480,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:54:57.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:54:57.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 21 23:54:58.665: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T23:54:58Z generation:1 name:name1 resourceVersion:2289134 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:37ba971b-692a-4c0a-93bf-29b29c1633ca] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 21 23:55:08.669: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T23:55:08Z generation:1 name:name2 resourceVersion:2289179 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:23381013-f30a-40ae-8018-0469421d3134] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 21 23:55:18.674: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T23:54:58Z generation:2 name:name1 resourceVersion:2289207 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:37ba971b-692a-4c0a-93bf-29b29c1633ca] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 21 23:55:28.680: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T23:55:08Z generation:2 name:name2 resourceVersion:2289243 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:23381013-f30a-40ae-8018-0469421d3134] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 21 23:55:38.688: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T23:54:58Z generation:2 name:name1 resourceVersion:2289277 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:37ba971b-692a-4c0a-93bf-29b29c1633ca] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 21 23:55:48.711: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T23:55:08Z generation:2 name:name2 resourceVersion:2289307 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:23381013-f30a-40ae-8018-0469421d3134] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:55:59.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-510" for this suite.

• [SLOW TEST:61.429 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":150,"skipped":2532,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:55:59.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 23:55:59.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a" in namespace "projected-436" to be "success or failure"
Aug 21 23:55:59.368: INFO: Pod "downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.025017ms
Aug 21 23:56:01.384: INFO: Pod "downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038480446s
Aug 21 23:56:03.510: INFO: Pod "downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165225644s
STEP: Saw pod success
Aug 21 23:56:03.510: INFO: Pod "downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a" satisfied condition "success or failure"
Aug 21 23:56:03.514: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a container client-container: 
STEP: delete the pod
Aug 21 23:56:03.739: INFO: Waiting for pod downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a to disappear
Aug 21 23:56:03.774: INFO: Pod downwardapi-volume-ed8de843-3ce1-4405-99d6-b5cd5bf49c5a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:56:03.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-436" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2534,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:56:03.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-9c95283f-4423-4a41-8319-2fde7204d1df
STEP: Creating secret with name s-test-opt-upd-4931b043-e355-4988-8927-588a11c7d9ac
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9c95283f-4423-4a41-8319-2fde7204d1df
STEP: Updating secret s-test-opt-upd-4931b043-e355-4988-8927-588a11c7d9ac
STEP: Creating secret with name s-test-opt-create-23877966-c85e-4372-98dc-6c750a65d036
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:57:37.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3665" for this suite.

• [SLOW TEST:93.543 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2535,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:57:37.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 23:57:37.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14" in namespace "downward-api-7366" to be "success or failure"
Aug 21 23:57:37.466: INFO: Pod "downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066169ms
Aug 21 23:57:39.687: INFO: Pod "downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223630904s
Aug 21 23:57:41.691: INFO: Pod "downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227323162s
Aug 21 23:57:43.798: INFO: Pod "downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334461657s
Aug 21 23:57:45.803: INFO: Pod "downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.339146568s
STEP: Saw pod success
Aug 21 23:57:45.803: INFO: Pod "downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14" satisfied condition "success or failure"
Aug 21 23:57:45.806: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14 container client-container: 
STEP: delete the pod
Aug 21 23:57:45.913: INFO: Waiting for pod downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14 to disappear
Aug 21 23:57:45.932: INFO: Pod downwardapi-volume-1306dfd0-fe0e-413e-a3c3-7d963ff24d14 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:57:45.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7366" for this suite.

• [SLOW TEST:8.613 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2536,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:57:45.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3684, will wait for the garbage collector to delete the pods
Aug 21 23:57:52.539: INFO: Deleting Job.batch foo took: 6.391361ms
Aug 21 23:57:52.839: INFO: Terminating Job.batch foo pods took: 300.257989ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:58:31.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3684" for this suite.

• [SLOW TEST:45.712 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":154,"skipped":2553,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:58:31.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 21 23:58:32.354: INFO: Created pod &Pod{ObjectMeta:{dns-106  dns-106 /api/v1/namespaces/dns-106/pods/dns-106 4e3f5cf9-2752-48fd-9bec-33e88857db8a 2289916 0 2020-08-21 23:58:32 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z626j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z626j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z626j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 21 23:58:36.487: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-106 PodName:dns-106 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 23:58:36.487: INFO: >>> kubeConfig: /root/.kube/config
I0821 23:58:36.521221       6 log.go:172] (0xc002fe9ef0) (0xc002f16000) Create stream
I0821 23:58:36.521250       6 log.go:172] (0xc002fe9ef0) (0xc002f16000) Stream added, broadcasting: 1
I0821 23:58:36.523959       6 log.go:172] (0xc002fe9ef0) Reply frame received for 1
I0821 23:58:36.524003       6 log.go:172] (0xc002fe9ef0) (0xc0029d12c0) Create stream
I0821 23:58:36.524020       6 log.go:172] (0xc002fe9ef0) (0xc0029d12c0) Stream added, broadcasting: 3
I0821 23:58:36.525102       6 log.go:172] (0xc002fe9ef0) Reply frame received for 3
I0821 23:58:36.525161       6 log.go:172] (0xc002fe9ef0) (0xc002a6e140) Create stream
I0821 23:58:36.525186       6 log.go:172] (0xc002fe9ef0) (0xc002a6e140) Stream added, broadcasting: 5
I0821 23:58:36.526108       6 log.go:172] (0xc002fe9ef0) Reply frame received for 5
I0821 23:58:36.596680       6 log.go:172] (0xc002fe9ef0) Data frame received for 3
I0821 23:58:36.596701       6 log.go:172] (0xc0029d12c0) (3) Data frame handling
I0821 23:58:36.596716       6 log.go:172] (0xc0029d12c0) (3) Data frame sent
I0821 23:58:36.599179       6 log.go:172] (0xc002fe9ef0) Data frame received for 3
I0821 23:58:36.599207       6 log.go:172] (0xc0029d12c0) (3) Data frame handling
I0821 23:58:36.599229       6 log.go:172] (0xc002fe9ef0) Data frame received for 5
I0821 23:58:36.599238       6 log.go:172] (0xc002a6e140) (5) Data frame handling
I0821 23:58:36.600366       6 log.go:172] (0xc002fe9ef0) Data frame received for 1
I0821 23:58:36.600382       6 log.go:172] (0xc002f16000) (1) Data frame handling
I0821 23:58:36.600396       6 log.go:172] (0xc002f16000) (1) Data frame sent
I0821 23:58:36.600408       6 log.go:172] (0xc002fe9ef0) (0xc002f16000) Stream removed, broadcasting: 1
I0821 23:58:36.600432       6 log.go:172] (0xc002fe9ef0) Go away received
I0821 23:58:36.600543       6 log.go:172] (0xc002fe9ef0) (0xc002f16000) Stream removed, broadcasting: 1
I0821 23:58:36.600560       6 log.go:172] (0xc002fe9ef0) (0xc0029d12c0) Stream removed, broadcasting: 3
I0821 23:58:36.600565       6 log.go:172] (0xc002fe9ef0) (0xc002a6e140) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 21 23:58:36.600: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-106 PodName:dns-106 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 23:58:36.600: INFO: >>> kubeConfig: /root/.kube/config
I0821 23:58:36.625746       6 log.go:172] (0xc002ec8790) (0xc002f16460) Create stream
I0821 23:58:36.625776       6 log.go:172] (0xc002ec8790) (0xc002f16460) Stream added, broadcasting: 1
I0821 23:58:36.627835       6 log.go:172] (0xc002ec8790) Reply frame received for 1
I0821 23:58:36.627872       6 log.go:172] (0xc002ec8790) (0xc001b8e0a0) Create stream
I0821 23:58:36.627891       6 log.go:172] (0xc002ec8790) (0xc001b8e0a0) Stream added, broadcasting: 3
I0821 23:58:36.628864       6 log.go:172] (0xc002ec8790) Reply frame received for 3
I0821 23:58:36.628902       6 log.go:172] (0xc002ec8790) (0xc002f16500) Create stream
I0821 23:58:36.628916       6 log.go:172] (0xc002ec8790) (0xc002f16500) Stream added, broadcasting: 5
I0821 23:58:36.629782       6 log.go:172] (0xc002ec8790) Reply frame received for 5
I0821 23:58:36.695495       6 log.go:172] (0xc002ec8790) Data frame received for 3
I0821 23:58:36.695530       6 log.go:172] (0xc001b8e0a0) (3) Data frame handling
I0821 23:58:36.695553       6 log.go:172] (0xc001b8e0a0) (3) Data frame sent
I0821 23:58:36.697977       6 log.go:172] (0xc002ec8790) Data frame received for 3
I0821 23:58:36.698005       6 log.go:172] (0xc001b8e0a0) (3) Data frame handling
I0821 23:58:36.698058       6 log.go:172] (0xc002ec8790) Data frame received for 5
I0821 23:58:36.698085       6 log.go:172] (0xc002f16500) (5) Data frame handling
I0821 23:58:36.699309       6 log.go:172] (0xc002ec8790) Data frame received for 1
I0821 23:58:36.699323       6 log.go:172] (0xc002f16460) (1) Data frame handling
I0821 23:58:36.699332       6 log.go:172] (0xc002f16460) (1) Data frame sent
I0821 23:58:36.699343       6 log.go:172] (0xc002ec8790) (0xc002f16460) Stream removed, broadcasting: 1
I0821 23:58:36.699356       6 log.go:172] (0xc002ec8790) Go away received
I0821 23:58:36.699536       6 log.go:172] (0xc002ec8790) (0xc002f16460) Stream removed, broadcasting: 1
I0821 23:58:36.699576       6 log.go:172] (0xc002ec8790) (0xc001b8e0a0) Stream removed, broadcasting: 3
I0821 23:58:36.699608       6 log.go:172] (0xc002ec8790) (0xc002f16500) Stream removed, broadcasting: 5
Aug 21 23:58:36.699: INFO: Deleting pod dns-106...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:58:36.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-106" for this suite.

• [SLOW TEST:5.129 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":155,"skipped":2581,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:58:36.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 21 23:58:37.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4" in namespace "downward-api-5735" to be "success or failure"
Aug 21 23:58:37.496: INFO: Pod "downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.899804ms
Aug 21 23:58:39.602: INFO: Pod "downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108010981s
Aug 21 23:58:41.605: INFO: Pod "downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4": Phase="Running", Reason="", readiness=true. Elapsed: 4.111396538s
Aug 21 23:58:43.610: INFO: Pod "downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115979033s
STEP: Saw pod success
Aug 21 23:58:43.610: INFO: Pod "downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4" satisfied condition "success or failure"
Aug 21 23:58:43.612: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4 container client-container: 
STEP: delete the pod
Aug 21 23:58:43.652: INFO: Waiting for pod downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4 to disappear
Aug 21 23:58:43.663: INFO: Pod downwardapi-volume-d3d603ed-b407-44c6-b5d1-dba61f49b5d4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:58:43.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5735" for this suite.

• [SLOW TEST:6.889 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2606,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:58:43.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-52a17d59-a477-44d2-bb16-1c246fc15bc1
STEP: Creating a pod to test consume configMaps
Aug 21 23:58:43.801: INFO: Waiting up to 5m0s for pod "pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038" in namespace "configmap-522" to be "success or failure"
Aug 21 23:58:43.815: INFO: Pod "pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038": Phase="Pending", Reason="", readiness=false. Elapsed: 14.630827ms
Aug 21 23:58:45.819: INFO: Pod "pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018443781s
Aug 21 23:58:47.823: INFO: Pod "pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022233279s
STEP: Saw pod success
Aug 21 23:58:47.823: INFO: Pod "pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038" satisfied condition "success or failure"
Aug 21 23:58:47.826: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038 container configmap-volume-test: 
STEP: delete the pod
Aug 21 23:58:47.887: INFO: Waiting for pod pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038 to disappear
Aug 21 23:58:47.990: INFO: Pod pod-configmaps-38c8748d-8ca3-4a50-983f-5d1c0956a038 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:58:47.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-522" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2617,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:58:48.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-36dc334f-6f08-48ce-87b0-5a8a84efc0d7
STEP: Creating a pod to test consume configMaps
Aug 21 23:58:48.707: INFO: Waiting up to 5m0s for pod "pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde" in namespace "configmap-4240" to be "success or failure"
Aug 21 23:58:48.730: INFO: Pod "pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde": Phase="Pending", Reason="", readiness=false. Elapsed: 23.258189ms
Aug 21 23:58:50.847: INFO: Pod "pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139573035s
Aug 21 23:58:52.851: INFO: Pod "pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143801736s
STEP: Saw pod success
Aug 21 23:58:52.851: INFO: Pod "pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde" satisfied condition "success or failure"
Aug 21 23:58:52.854: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde container configmap-volume-test: 
STEP: delete the pod
Aug 21 23:58:53.014: INFO: Waiting for pod pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde to disappear
Aug 21 23:58:53.023: INFO: Pod pod-configmaps-15ebe809-a537-4d6c-95f6-a577681b2cde no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:58:53.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4240" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2643,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:58:53.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 21 23:59:00.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6236" for this suite.

• [SLOW TEST:7.106 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":159,"skipped":2659,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 21 23:59:00.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 21 23:59:00.268: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 23:59:00.276: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:00.281: INFO: Number of nodes with available pods: 0
Aug 21 23:59:00.281: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:01.369: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:01.372: INFO: Number of nodes with available pods: 0
Aug 21 23:59:01.372: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:02.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:02.384: INFO: Number of nodes with available pods: 0
Aug 21 23:59:02.384: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:03.284: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:03.614: INFO: Number of nodes with available pods: 0
Aug 21 23:59:03.614: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:04.286: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:04.335: INFO: Number of nodes with available pods: 0
Aug 21 23:59:04.335: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:05.290: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:05.304: INFO: Number of nodes with available pods: 0
Aug 21 23:59:05.304: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:06.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:06.291: INFO: Number of nodes with available pods: 2
Aug 21 23:59:06.291: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 21 23:59:06.374: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:06.374: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:06.395: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:07.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:07.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:07.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:08.531: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:08.531: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:08.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:09.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:09.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:09.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:10.399: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:10.399: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:10.399: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:10.402: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:11.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:11.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:11.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:11.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:12.399: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:12.399: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:12.399: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:12.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:13.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:13.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:13.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:13.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:14.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:14.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:14.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:14.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:15.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:15.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:15.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:15.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:16.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:16.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:16.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:16.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:17.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:17.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:17.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:17.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:18.464: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:18.464: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:18.464: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:18.468: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:19.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:19.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:19.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:19.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:20.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:20.400: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:20.400: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:20.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:21.401: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:21.401: INFO: Wrong image for pod: daemon-set-l6xhd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:21.401: INFO: Pod daemon-set-l6xhd is not available
Aug 21 23:59:21.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:22.494: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:22.494: INFO: Pod daemon-set-vwt2l is not available
Aug 21 23:59:22.498: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:23.518: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:23.518: INFO: Pod daemon-set-vwt2l is not available
Aug 21 23:59:23.546: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:24.404: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:24.404: INFO: Pod daemon-set-vwt2l is not available
Aug 21 23:59:24.596: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:25.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:25.400: INFO: Pod daemon-set-vwt2l is not available
Aug 21 23:59:25.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:26.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:26.400: INFO: Pod daemon-set-vwt2l is not available
Aug 21 23:59:26.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:27.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:27.400: INFO: Pod daemon-set-vwt2l is not available
Aug 21 23:59:27.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:28.399: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:28.402: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:29.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:29.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:30.620: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:30.620: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:30.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:31.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:31.400: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:31.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:32.410: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:32.410: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:32.414: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:33.434: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:33.434: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:33.438: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:34.519: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:34.519: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:34.522: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:35.482: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:35.482: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:35.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:36.401: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:36.401: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:36.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:37.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:37.400: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:37.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:38.401: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:38.401: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:38.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:39.433: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:39.433: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:39.463: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:40.399: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:40.399: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:40.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:41.400: INFO: Wrong image for pod: daemon-set-jzh9b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 23:59:41.400: INFO: Pod daemon-set-jzh9b is not available
Aug 21 23:59:41.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:42.400: INFO: Pod daemon-set-4vngl is not available
Aug 21 23:59:42.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 21 23:59:42.408: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:42.411: INFO: Number of nodes with available pods: 1
Aug 21 23:59:42.411: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:43.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:43.419: INFO: Number of nodes with available pods: 1
Aug 21 23:59:43.419: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:44.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:44.419: INFO: Number of nodes with available pods: 1
Aug 21 23:59:44.419: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:45.415: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:45.418: INFO: Number of nodes with available pods: 1
Aug 21 23:59:45.418: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:46.639: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:46.687: INFO: Number of nodes with available pods: 1
Aug 21 23:59:46.687: INFO: Node jerma-worker is running more than one daemon pod
Aug 21 23:59:47.450: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 23:59:47.454: INFO: Number of nodes with available pods: 2
Aug 21 23:59:47.454: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9285, will wait for the garbage collector to delete the pods
Aug 21 23:59:47.525: INFO: Deleting DaemonSet.extensions daemon-set took: 5.896657ms
Aug 21 23:59:47.825: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265898ms
Aug 22 00:00:01.728: INFO: Number of nodes with available pods: 0
Aug 22 00:00:01.728: INFO: Number of running nodes: 0, number of available pods: 0
Aug 22 00:00:01.731: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9285/daemonsets","resourceVersion":"2290375"},"items":null}

Aug 22 00:00:01.733: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9285/pods","resourceVersion":"2290375"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:00:01.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9285" for this suite.

• [SLOW TEST:61.629 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":160,"skipped":2669,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:00:01.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-430a7f01-96d7-4663-9fc5-144f7a3fc3e5
STEP: Creating a pod to test consume configMaps
Aug 22 00:00:01.912: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755" in namespace "configmap-9427" to be "success or failure"
Aug 22 00:00:01.977: INFO: Pod "pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755": Phase="Pending", Reason="", readiness=false. Elapsed: 64.080038ms
Aug 22 00:00:03.980: INFO: Pod "pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067638616s
Aug 22 00:00:05.985: INFO: Pod "pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072306315s
STEP: Saw pod success
Aug 22 00:00:05.985: INFO: Pod "pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755" satisfied condition "success or failure"
Aug 22 00:00:05.988: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755 container configmap-volume-test: 
STEP: delete the pod
Aug 22 00:00:06.140: INFO: Waiting for pod pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755 to disappear
Aug 22 00:00:06.163: INFO: Pod pod-configmaps-bc144340-00c5-494a-8d56-baba656b5755 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:00:06.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9427" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2672,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:00:06.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 22 00:00:06.326: INFO: Waiting up to 5m0s for pod "client-containers-904a07e2-f84f-450b-8025-673aecaffbd2" in namespace "containers-9140" to be "success or failure"
Aug 22 00:00:06.331: INFO: Pod "client-containers-904a07e2-f84f-450b-8025-673aecaffbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094759ms
Aug 22 00:00:08.823: INFO: Pod "client-containers-904a07e2-f84f-450b-8025-673aecaffbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.496950319s
Aug 22 00:00:10.943: INFO: Pod "client-containers-904a07e2-f84f-450b-8025-673aecaffbd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.616847208s
STEP: Saw pod success
Aug 22 00:00:10.943: INFO: Pod "client-containers-904a07e2-f84f-450b-8025-673aecaffbd2" satisfied condition "success or failure"
Aug 22 00:00:10.947: INFO: Trying to get logs from node jerma-worker2 pod client-containers-904a07e2-f84f-450b-8025-673aecaffbd2 container test-container: 
STEP: delete the pod
Aug 22 00:00:11.033: INFO: Waiting for pod client-containers-904a07e2-f84f-450b-8025-673aecaffbd2 to disappear
Aug 22 00:00:11.244: INFO: Pod client-containers-904a07e2-f84f-450b-8025-673aecaffbd2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:00:11.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9140" for this suite.

• [SLOW TEST:5.302 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2678,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:00:11.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-96129628-f7ea-49ac-8850-9f88770c396a
STEP: Creating a pod to test consume configMaps
Aug 22 00:00:12.293: INFO: Waiting up to 5m0s for pod "pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a" in namespace "configmap-4813" to be "success or failure"
Aug 22 00:00:12.482: INFO: Pod "pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a": Phase="Pending", Reason="", readiness=false. Elapsed: 189.20022ms
Aug 22 00:00:14.506: INFO: Pod "pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212718454s
Aug 22 00:00:16.508: INFO: Pod "pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215058565s
STEP: Saw pod success
Aug 22 00:00:16.508: INFO: Pod "pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a" satisfied condition "success or failure"
Aug 22 00:00:16.530: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a container configmap-volume-test: 
STEP: delete the pod
Aug 22 00:00:16.933: INFO: Waiting for pod pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a to disappear
Aug 22 00:00:16.982: INFO: Pod pod-configmaps-814af3c2-110a-49dd-bc85-2c6a3b79d37a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:00:16.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4813" for this suite.

• [SLOW TEST:5.641 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2691,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:00:17.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-ed220b17-fc84-4fb1-8da5-f9fba1e6971a
STEP: Creating configMap with name cm-test-opt-upd-3b2a912d-6a40-49d8-a920-c5b43b4c7774
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ed220b17-fc84-4fb1-8da5-f9fba1e6971a
STEP: Updating configmap cm-test-opt-upd-3b2a912d-6a40-49d8-a920-c5b43b4c7774
STEP: Creating configMap with name cm-test-opt-create-fe7b1786-49f5-47fe-8682-ec2d8c7f0bad
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:01:32.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3254" for this suite.

• [SLOW TEST:75.175 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2711,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:01:32.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:01:32.360: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:01:38.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8812" for this suite.

• [SLOW TEST:6.551 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":165,"skipped":2721,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:01:38.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:01:39.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6903" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":166,"skipped":2729,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:01:39.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:01:39.416: INFO: Creating ReplicaSet my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87
Aug 22 00:01:39.467: INFO: Pod name my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87: Found 0 pods out of 1
Aug 22 00:01:44.477: INFO: Pod name my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87: Found 1 pods out of 1
Aug 22 00:01:44.477: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87" is running
Aug 22 00:01:44.479: INFO: Pod "my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87-qrmpb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 00:01:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 00:01:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 00:01:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-22 00:01:39 +0000 UTC Reason: Message:}])
Aug 22 00:01:44.479: INFO: Trying to dial the pod
Aug 22 00:01:49.488: INFO: Controller my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87: Got expected result from replica 1 [my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87-qrmpb]: "my-hostname-basic-9b699edd-0887-4d16-9cbc-fdf349639a87-qrmpb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:01:49.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5861" for this suite.

• [SLOW TEST:10.126 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":167,"skipped":2749,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:01:49.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 22 00:01:53.625: INFO: &Pod{ObjectMeta:{send-events-a016c31e-d2fe-45d5-8e47-a31b576a057f  events-3497 /api/v1/namespaces/events-3497/pods/send-events-a016c31e-d2fe-45d5-8e47-a31b576a057f 0a6dd077-6a1e-41c9-89b3-1ce33543a1f6 2290940 0 2020-08-22 00:01:49 +0000 UTC   map[name:foo time:548547707] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kw9ng,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kw9ng,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kw9ng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:01:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:01:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:01:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:01:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.227,StartTime:2020-08-22 00:01:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:01:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://bbb13cd4f0f09e56c6912e6e6391611c71363d667a852e8635233b88e50d60ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 22 00:01:55.630: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 22 00:01:57.634: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:01:57.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3497" for this suite.

• [SLOW TEST:8.231 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":168,"skipped":2795,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:01:57.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 22 00:02:02.316: INFO: Successfully updated pod "labelsupdate7f51b762-4a8f-4db5-b567-10037cef33f8"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:02:04.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2179" for this suite.

• [SLOW TEST:6.656 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2797,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:02:04.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8495
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 22 00:02:04.497: INFO: Found 0 stateful pods, waiting for 3
Aug 22 00:02:14.501: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 00:02:14.501: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 00:02:14.502: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 22 00:02:24.502: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 00:02:24.502: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 00:02:24.502: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 22 00:02:24.530: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 22 00:02:34.574: INFO: Updating stateful set ss2
Aug 22 00:02:34.630: INFO: Waiting for Pod statefulset-8495/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 22 00:02:45.333: INFO: Found 2 stateful pods, waiting for 3
Aug 22 00:02:55.336: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 00:02:55.336: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 22 00:02:55.336: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 22 00:02:55.354: INFO: Updating stateful set ss2
Aug 22 00:02:55.402: INFO: Waiting for Pod statefulset-8495/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 22 00:03:05.423: INFO: Updating stateful set ss2
Aug 22 00:03:05.891: INFO: Waiting for StatefulSet statefulset-8495/ss2 to complete update
Aug 22 00:03:05.891: INFO: Waiting for Pod statefulset-8495/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 22 00:03:15.896: INFO: Waiting for StatefulSet statefulset-8495/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 22 00:03:25.899: INFO: Deleting all statefulset in ns statefulset-8495
Aug 22 00:03:25.902: INFO: Scaling statefulset ss2 to 0
Aug 22 00:03:46.039: INFO: Waiting for statefulset status.replicas updated to 0
Aug 22 00:03:46.042: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:03:46.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8495" for this suite.

• [SLOW TEST:101.686 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":170,"skipped":2805,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:03:46.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:04:06.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5978" for this suite.

• [SLOW TEST:20.171 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":171,"skipped":2826,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:04:06.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 00:04:06.295: INFO: Waiting up to 5m0s for pod "downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b" in namespace "downward-api-2950" to be "success or failure"
Aug 22 00:04:06.298: INFO: Pod "downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488262ms
Aug 22 00:04:08.303: INFO: Pod "downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008003136s
Aug 22 00:04:10.310: INFO: Pod "downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015463374s
STEP: Saw pod success
Aug 22 00:04:10.310: INFO: Pod "downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b" satisfied condition "success or failure"
Aug 22 00:04:10.313: INFO: Trying to get logs from node jerma-worker2 pod downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b container dapi-container: 
STEP: delete the pod
Aug 22 00:04:10.362: INFO: Waiting for pod downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b to disappear
Aug 22 00:04:10.370: INFO: Pod downward-api-2df9b240-4a39-4e7f-bc5a-deb15425e82b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:04:10.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2950" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2853,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:04:10.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-e5094a47-8df9-4c9a-a8f8-259064782a18
STEP: Creating a pod to test consume configMaps
Aug 22 00:04:10.684: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29" in namespace "projected-8149" to be "success or failure"
Aug 22 00:04:10.706: INFO: Pod "pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29": Phase="Pending", Reason="", readiness=false. Elapsed: 21.531522ms
Aug 22 00:04:12.709: INFO: Pod "pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024403792s
Aug 22 00:04:14.713: INFO: Pod "pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29": Phase="Running", Reason="", readiness=true. Elapsed: 4.02826178s
Aug 22 00:04:16.717: INFO: Pod "pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032424317s
STEP: Saw pod success
Aug 22 00:04:16.717: INFO: Pod "pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29" satisfied condition "success or failure"
Aug 22 00:04:16.720: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 22 00:04:16.817: INFO: Waiting for pod pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29 to disappear
Aug 22 00:04:16.829: INFO: Pod pod-projected-configmaps-34c72974-cb3d-4a0a-9119-b9857c4c0a29 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:04:16.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8149" for this suite.

• [SLOW TEST:6.461 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2869,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:04:16.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:04:16.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8879'
Aug 22 00:04:20.152: INFO: stderr: ""
Aug 22 00:04:20.152: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 22 00:04:20.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8879'
Aug 22 00:04:20.416: INFO: stderr: ""
Aug 22 00:04:20.416: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 22 00:04:21.421: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 00:04:21.421: INFO: Found 0 / 1
Aug 22 00:04:22.465: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 00:04:22.465: INFO: Found 0 / 1
Aug 22 00:04:23.421: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 00:04:23.421: INFO: Found 1 / 1
Aug 22 00:04:23.421: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 22 00:04:23.423: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 22 00:04:23.423: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 22 00:04:23.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-pc5mm --namespace=kubectl-8879'
Aug 22 00:04:23.551: INFO: stderr: ""
Aug 22 00:04:23.551: INFO: stdout: "Name:         agnhost-master-pc5mm\nNamespace:    kubectl-8879\nPriority:     0\nNode:         jerma-worker2/172.18.0.3\nStart Time:   Sat, 22 Aug 2020 00:04:20 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.219\nIPs:\n  IP:           10.244.1.219\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://606eaed58c1a520ac496ef94595762c55a7587641384a0a6109cdb91c1d0d789\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 22 Aug 2020 00:04:23 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xs8q2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-xs8q2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-xs8q2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  3s    default-scheduler       Successfully assigned kubectl-8879/agnhost-master-pc5mm to jerma-worker2\n  Normal  Pulled     2s    kubelet, jerma-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s    kubelet, jerma-worker2  Created container agnhost-master\n  Normal  Started    0s    kubelet, jerma-worker2  Started container agnhost-master\n"
Aug 22 00:04:23.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8879'
Aug 22 00:04:23.680: INFO: stderr: ""
Aug 22 00:04:23.680: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-8879\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-master-pc5mm\n"
Aug 22 00:04:23.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8879'
Aug 22 00:04:23.788: INFO: stderr: ""
Aug 22 00:04:23.788: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-8879\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.83.156\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.219:6379\nSession Affinity:  None\nEvents:            \n"
Aug 22 00:04:23.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Aug 22 00:04:23.912: INFO: stderr: ""
Aug 22 00:04:23.912: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:37:06 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Sat, 22 Aug 2020 00:04:20 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 22 Aug 2020 00:02:48 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 22 Aug 2020 00:02:48 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 22 Aug 2020 00:02:48 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 22 Aug 2020 00:02:48 +0000   Sat, 15 Aug 2020 09:37:40 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 e52c45bc589d48d995e8fd79ad5bf250\n  System UUID:                b981bdc7-d264-48ef-ab5e-3308e23aaf13\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-bvrm4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     6d14h\n  kube-system                 coredns-6955765f44-db8rh                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     6d14h\n  kube-system                 etcd-jerma-control-plane          
             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d14h\n  kube-system                 kindnet-j88mt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      6d14h\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         6d14h\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         6d14h\n  kube-system                 kube-proxy-hmb6l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d14h\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         6d14h\n  local-path-storage          local-path-provisioner-58f6947c7-p2cqw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d14h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 22 00:04:23.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8879'
Aug 22 00:04:24.008: INFO: stderr: ""
Aug 22 00:04:24.008: INFO: stdout: "Name:         kubectl-8879\nLabels:       e2e-framework=kubectl\n              e2e-run=92598c81-d644-4ac2-836d-c37dc3b59cf3\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:04:24.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8879" for this suite.

• [SLOW TEST:7.175 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":174,"skipped":2887,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:04:24.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:04:26.030: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:04:28.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733651466, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733651466, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733651466, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733651465, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:04:31.233: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:04:31.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2573-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:04:32.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3006" for this suite.
STEP: Destroying namespace "webhook-3006-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.475 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":175,"skipped":2889,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:04:32.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:04:48.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2222" for this suite.

• [SLOW TEST:16.233 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":176,"skipped":2924,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:04:48.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 22 00:04:48.796: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:04:57.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2364" for this suite.

• [SLOW TEST:9.175 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":177,"skipped":2989,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:04:57.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 00:04:57.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-726'
Aug 22 00:04:58.098: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 22 00:04:58.098: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 22 00:05:00.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-726'
Aug 22 00:05:00.551: INFO: stderr: ""
Aug 22 00:05:00.551: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:05:00.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-726" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":178,"skipped":2995,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:05:00.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 22 00:05:00.753: INFO: Waiting up to 5m0s for pod "pod-37019d90-95ea-49a5-9dba-d53c07725a73" in namespace "emptydir-4892" to be "success or failure"
Aug 22 00:05:00.767: INFO: Pod "pod-37019d90-95ea-49a5-9dba-d53c07725a73": Phase="Pending", Reason="", readiness=false. Elapsed: 14.380057ms
Aug 22 00:05:02.772: INFO: Pod "pod-37019d90-95ea-49a5-9dba-d53c07725a73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018582388s
Aug 22 00:05:04.776: INFO: Pod "pod-37019d90-95ea-49a5-9dba-d53c07725a73": Phase="Running", Reason="", readiness=true. Elapsed: 4.022598213s
Aug 22 00:05:06.784: INFO: Pod "pod-37019d90-95ea-49a5-9dba-d53c07725a73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030828369s
STEP: Saw pod success
Aug 22 00:05:06.784: INFO: Pod "pod-37019d90-95ea-49a5-9dba-d53c07725a73" satisfied condition "success or failure"
Aug 22 00:05:06.786: INFO: Trying to get logs from node jerma-worker pod pod-37019d90-95ea-49a5-9dba-d53c07725a73 container test-container: 
STEP: delete the pod
Aug 22 00:05:06.802: INFO: Waiting for pod pod-37019d90-95ea-49a5-9dba-d53c07725a73 to disappear
Aug 22 00:05:06.806: INFO: Pod pod-37019d90-95ea-49a5-9dba-d53c07725a73 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:05:06.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4892" for this suite.

• [SLOW TEST:6.253 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3032,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:05:06.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:05:06.917: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a3cc92cc-6917-45c5-92a2-2430b80d5159" in namespace "security-context-test-3787" to be "success or failure"
Aug 22 00:05:06.948: INFO: Pod "alpine-nnp-false-a3cc92cc-6917-45c5-92a2-2430b80d5159": Phase="Pending", Reason="", readiness=false. Elapsed: 30.650129ms
Aug 22 00:05:08.952: INFO: Pod "alpine-nnp-false-a3cc92cc-6917-45c5-92a2-2430b80d5159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034587715s
Aug 22 00:05:10.956: INFO: Pod "alpine-nnp-false-a3cc92cc-6917-45c5-92a2-2430b80d5159": Phase="Running", Reason="", readiness=true. Elapsed: 4.038017909s
Aug 22 00:05:12.959: INFO: Pod "alpine-nnp-false-a3cc92cc-6917-45c5-92a2-2430b80d5159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041603501s
Aug 22 00:05:12.959: INFO: Pod "alpine-nnp-false-a3cc92cc-6917-45c5-92a2-2430b80d5159" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:05:12.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3787" for this suite.

• [SLOW TEST:6.162 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3044,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:05:12.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0822 00:05:53.283709       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 22 00:05:53.283: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:05:53.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8767" for this suite.

• [SLOW TEST:40.316 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":181,"skipped":3047,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:05:53.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:05:53.354: INFO: Creating deployment "webserver-deployment"
Aug 22 00:05:53.405: INFO: Waiting for observed generation 1
Aug 22 00:05:55.815: INFO: Waiting for all required pods to come up
Aug 22 00:05:55.841: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 22 00:06:05.966: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 22 00:06:06.043: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 22 00:06:06.252: INFO: Updating deployment webserver-deployment
Aug 22 00:06:06.253: INFO: Waiting for observed generation 2
Aug 22 00:06:08.694: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 22 00:06:08.745: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 22 00:06:08.763: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 22 00:06:09.698: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 22 00:06:09.698: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 22 00:06:09.899: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 22 00:06:09.903: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 22 00:06:09.903: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 22 00:06:09.910: INFO: Updating deployment webserver-deployment
Aug 22 00:06:09.910: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 22 00:06:10.834: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 22 00:06:13.853: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
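
The 20 and 13 verified above are the proportional-scaling arithmetic at work. Because a rollout was in flight when the scale-up landed (the first rollout's ReplicaSet at 8 replicas, the second at 5, for 13 total), the deployment does not simply resize one ReplicaSet: with maxSurge=3 it may run up to 30+3=33 pods, and the 33-13=20 additional replicas are split between the two ReplicaSets in proportion to their current sizes. A sketch of that arithmetic, simplified relative to the controller's actual leftover-rounding rules but reproducing the numbers logged here:

    // proportionalTargets reproduces the arithmetic only; the real deployment
    // controller distributes rounding leftovers by its own rules, so treat
    // this as a sketch that happens to match the log above.
    func proportionalTargets(oldReplicas, newReplicas, desired, maxSurge int32) (int32, int32) {
        ceiling := desired + maxSurge             // 30 + 3 = 33
        current := oldReplicas + newReplicas      // 8 + 5 = 13
        extra := ceiling - current                // 20 replicas to distribute
        extraOld := extra * oldReplicas / current // 20*8/13 = 12 (integer division)
        extraNew := extra - extraOld              // remainder, 8, goes to the new RS
        return oldReplicas + extraOld, newReplicas + extraNew // 20 and 13
    }
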
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 00:06:14.663: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-175 /apis/apps/v1/namespaces/deployment-175/deployments/webserver-deployment 076d4315-b558-4e8c-bf21-51516a0dc58f 2292849 3 2020-08-22 00:05:53 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00081f4b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-22 00:06:10 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-22 00:06:11 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 22 00:06:15.367: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-175 /apis/apps/v1/namespaces/deployment-175/replicasets/webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 2292826 3 2020-08-22 00:06:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 076d4315-b558-4e8c-bf21-51516a0dc58f 0xc00418d7c7 0xc00418d7c8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00418d838  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 00:06:15.368: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 22 00:06:15.368: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-175 /apis/apps/v1/namespaces/deployment-175/replicasets/webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 2292845 3 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 076d4315-b558-4e8c-bf21-51516a0dc58f 0xc00418d707 0xc00418d708}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00418d768  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 22 00:06:15.705: INFO: Pod "webserver-deployment-595b5b9587-5bv54" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5bv54 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-5bv54 f078faad-81b7-4737-90ac-52bc6b80c444 2292829 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc00418dce7 0xc00418dce8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 00:06:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.705: INFO: Pod "webserver-deployment-595b5b9587-5ld9f" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5ld9f webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-5ld9f 1f88efd1-c9f8-487a-898a-28ed2bb431c0 2292786 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc00418de47 0xc00418de48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.705: INFO: Pod "webserver-deployment-595b5b9587-5xd66" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5xd66 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-5xd66 bf3bf6e1-ec5e-4b37-845c-0402c2ed602e 2292680 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc00418df77 0xc00418df78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.231,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c1d7e9b55ea0472112a1329274b397d77eb381856e44a1d4fd52e1dceff9b98d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.706: INFO: Pod "webserver-deployment-595b5b9587-7tbk7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7tbk7 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-7tbk7 b1810249-6306-4ef1-bbcb-24f16e67fe85 2292857 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aee187 0xc003aee188}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 00:06:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.706: INFO: Pod "webserver-deployment-595b5b9587-7zzf9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7zzf9 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-7zzf9 306c9e42-88cf-4a81-bdd8-eb10d4e37733 2292873 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aee2f7 0xc003aee2f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.706: INFO: Pod "webserver-deployment-595b5b9587-dqptd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dqptd webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-dqptd 8ebcedce-606b-4fa6-818f-e03585baeb92 2292870 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aee457 0xc003aee458}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 00:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.706: INFO: Pod "webserver-deployment-595b5b9587-dzsg9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dzsg9 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-dzsg9 0f837578-e110-4bb2-ade6-3528e8ad895a 2292816 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aee5c7 0xc003aee5c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.706: INFO: Pod "webserver-deployment-595b5b9587-gcmfs" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gcmfs webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-gcmfs 57ca3755-6aca-4fa9-9bf8-1abdac3f7492 2292671 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aee6e7 0xc003aee6e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.230,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f75b2dc20ed0504ead761587be86501b449462c309925b5aa934effab322d0cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.706: INFO: Pod "webserver-deployment-595b5b9587-gngnq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gngnq webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-gngnq 0e4e6dd2-6f62-4eb0-a667-5967ae2f8a15 2292600 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aee887 0xc003aee888}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.228,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://efce62964494689814a7b4498da9bc7184002266ef5d2d5d8d2c48f9e3709c10,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.707: INFO: Pod "webserver-deployment-595b5b9587-gq6sg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gq6sg webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-gq6sg a90831fb-28e9-4c08-8d6e-c90d34516066 2292819 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aeea07 0xc003aeea08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.707: INFO: Pod "webserver-deployment-595b5b9587-hstsj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hstsj webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-hstsj a82e2b7d-bfa7-48d7-b906-4fc3c38bd9f7 2292815 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aeeb27 0xc003aeeb28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.707: INFO: Pod "webserver-deployment-595b5b9587-kjv9p" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kjv9p webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-kjv9p 556863ea-100d-4ddc-bb1f-a8894c79f11d 2292817 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aeec47 0xc003aeec48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.707: INFO: Pod "webserver-deployment-595b5b9587-mkx5k" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mkx5k webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-mkx5k bd06d407-2d31-40eb-ab6e-f948da222fa3 2292623 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aeed67 0xc003aeed68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.246,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f4d9c7cd484f09701c511b9eeb6abd62f68c50a2e946c0e8d787c609d2fd7487,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
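For reference, "is available" in these dumps follows the standard Kubernetes rule: the pod's Ready condition is True and has stayed True for at least the Deployment's minReadySeconds. The Running pods above meet it; the Pending ones, which report no Ready condition at all, do not. A minimal Go sketch of that rule, assuming only the k8s.io/api and k8s.io/apimachinery modules (the helper names are illustrative, not the framework's own):

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podReadyCondition returns the pod's Ready condition, or nil if absent.
func podReadyCondition(status v1.PodStatus) *v1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == v1.PodReady {
			return &status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable approximates the availability rule described above:
// Ready must be True, and for at least minReadySeconds.
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := podReadyCondition(pod.Status)
	if c == nil || c.Status != v1.ConditionTrue {
		return false
	}
	if minReadySeconds == 0 {
		return true
	}
	readyFor := now.Time.Sub(c.LastTransitionTime.Time)
	return readyFor >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	// Pod "mkx5k" above: Ready has been True since 00:06:04.
	pod := &v1.Pod{Status: v1.PodStatus{
		Phase: v1.PodRunning,
		Conditions: []v1.PodCondition{{
			Type:               v1.PodReady,
			Status:             v1.ConditionTrue,
			LastTransitionTime: metav1.Date(2020, time.August, 22, 0, 6, 4, 0, time.UTC),
		}},
	}}
	now := metav1.Date(2020, time.August, 22, 0, 6, 15, 0, time.UTC)
	fmt.Println(isPodAvailable(pod, 0, now)) // prints: true
}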
Aug 22 00:06:15.707: INFO: Pod "webserver-deployment-595b5b9587-pnlq7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pnlq7 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-pnlq7 04ab8334-1a1b-452e-88a9-1ad863138d1c 2292867 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aeeee7 0xc003aeeee8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
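"Not available" dumps of this shape carry the useful detail: Ready is False with Reason ContainersNotReady, and the single httpd container sits in Waiting with Reason ContainerCreating, meaning the kubelet has accepted the pod but the container has not started yet. A small illustrative helper (same imports as the sketch above; not part of the e2e framework) that turns those ContainerStatuses fields into a one-line summary per container:

// unreadyReasons reports, for each unready container, the state it is stuck in.
func unreadyReasons(pod *v1.Pod) []string {
	var out []string
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Ready {
			continue
		}
		switch {
		case cs.State.Waiting != nil:
			// For "pnlq7" above this yields: "httpd: Waiting (ContainerCreating)"
			out = append(out, fmt.Sprintf("%s: Waiting (%s)", cs.Name, cs.State.Waiting.Reason))
		case cs.State.Terminated != nil:
			out = append(out, fmt.Sprintf("%s: Terminated (%s)", cs.Name, cs.State.Terminated.Reason))
		default:
			out = append(out, cs.Name+": Running but not yet Ready")
		}
	}
	return out
}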
Aug 22 00:06:15.708: INFO: Pod "webserver-deployment-595b5b9587-qjfpr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qjfpr webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-qjfpr 30d4bccf-19e6-4f91-a06d-c6772b46b88b 2292617 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aef047 0xc003aef048}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-22 00:06:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.247,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6b06e3eda3ff033f5ab4f738492021344e7cde79c0fe97bf5b68f93fd14c21a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.708: INFO: Pod "webserver-deployment-595b5b9587-r468x" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-r468x webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-r468x 3470e4d1-b494-4ce6-a8cd-0f4334570a1d 2292821 0 2020-08-22 00:06:09 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aef1c7 0xc003aef1c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.708: INFO: Pod "webserver-deployment-595b5b9587-vf6c8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vf6c8 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-vf6c8 3fbf4e89-0dd7-4a9b-b72d-a8614485dda8 2292567 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aef327 0xc003aef328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-22 00:06:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.244,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://413134192e1a960ea7838e32452a27d40d1ff6f575c3a9e8ca936b9620d10ea6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.708: INFO: Pod "webserver-deployment-595b5b9587-x76rd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x76rd webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-x76rd 875b5eb1-a723-49ee-8b39-cf2cd86665af 2292597 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aef4b7 0xc003aef4b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.227,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://876608ba51311837a365ee7b851b0712cc3909657a27a5f9e555e2f38854bb5e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.708: INFO: Pod "webserver-deployment-595b5b9587-x8gq7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x8gq7 webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-x8gq7 042c22fa-db15-45fd-8d8f-52a07cdd4c8d 2292818 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aef697 0xc003aef698}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.709: INFO: Pod "webserver-deployment-595b5b9587-zhf6h" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zhf6h webserver-deployment-595b5b9587- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-595b5b9587-zhf6h 16e82ef6-b7e5-4eb4-806f-7edaab5137bb 2292587 0 2020-08-22 00:05:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 13152194-2803-491f-b800-988f8719190d 0xc003aef7c7 0xc003aef7c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-22 00:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:05:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.245,StartTime:2020-08-22 00:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:06:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a948d1762d47181d408208ba3689196388415c0aff9e42e27cadce059cf7d5d5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.709: INFO: Pod "webserver-deployment-c7997dcc8-49c78" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-49c78 webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-49c78 08ce9364-3a36-439d-9d8a-649ce295462f 2292810 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003aef947 0xc003aef948}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
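This pod is the first from the second ReplicaSet, webserver-deployment-c7997dcc8, whose template image is webserver:404 — evidently the non-pullable image this test rolls out — so its pods can be scheduled but never become Ready, and the old 595b5b9587 pods remain the only available ones. A sketch that reproduces the per-ReplicaSet picture by grouping on the pod-template-hash label (it reuses isPodAvailable from the earlier sketch; availableByTemplateHash is a made-up name):

// availableByTemplateHash counts available pods per pod-template-hash value.
func availableByTemplateHash(pods []v1.Pod, now metav1.Time) map[string]int {
	counts := map[string]int{}
	for i := range pods {
		hash := pods[i].Labels["pod-template-hash"]
		if isPodAvailable(&pods[i], 0, now) {
			counts[hash]++
		}
	}
	return counts
}

// For the pods dumped here, every available one carries hash 595b5b9587;
// c7997dcc8 maps to zero.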
Aug 22 00:06:15.709: INFO: Pod "webserver-deployment-c7997dcc8-6pdg9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6pdg9 webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-6pdg9 02d83c11-8aa2-4690-bb5f-c85e2cc09797 2292727 0 2020-08-22 00:06:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003aefa77 0xc003aefa78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.709: INFO: Pod "webserver-deployment-c7997dcc8-6zgp6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6zgp6 webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-6zgp6 98dbcac9-22dd-4282-81e1-72d9cef7e38b 2292739 0 2020-08-22 00:06:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003aefbf7 0xc003aefbf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 00:06:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.709: INFO: Pod "webserver-deployment-c7997dcc8-8b2xb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8b2xb webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-8b2xb b767f487-7c1d-4fbe-a377-03472408610b 2292808 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003aefd77 0xc003aefd78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.710: INFO: Pod "webserver-deployment-c7997dcc8-9v5qx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9v5qx webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-9v5qx be4acee5-2f7f-4f1d-884b-013a4aee544d 2292879 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003aefea7 0xc003aefea8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 00:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.710: INFO: Pod "webserver-deployment-c7997dcc8-dc6p2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dc6p2 webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-dc6p2 4d232069-c55d-41a5-ac3d-c5006a36dffe 2292844 0 2020-08-22 00:06:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003812027 0xc003812028}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.249,StartTime:2020-08-22 00:06:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.710: INFO: Pod "webserver-deployment-c7997dcc8-hpgpj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hpgpj webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-hpgpj 94456a2f-af31-4975-9a58-571102855026 2292750 0 2020-08-22 00:06:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc0038121e7 0xc0038121e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.710: INFO: Pod "webserver-deployment-c7997dcc8-jgpdc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jgpdc webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-jgpdc 983c2ee7-4bf0-40fd-bb65-efc73f034ef8 2292853 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003812387 0xc003812388}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.710: INFO: Pod "webserver-deployment-c7997dcc8-p9nmn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p9nmn webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-p9nmn 85d1993d-8866-45e3-aec7-c4b56378dace 2292886 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003812507 0xc003812508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.710: INFO: Pod "webserver-deployment-c7997dcc8-q25c8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q25c8 webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-q25c8 8d6bf94c-3472-490a-a7a5-7312e0ce893c 2292824 0 2020-08-22 00:06:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003812687 0xc003812688}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.710: INFO: Pod "webserver-deployment-c7997dcc8-sggt4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sggt4 webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-sggt4 1a3e670a-eecd-4fbd-b7db-d315590b4670 2292753 0 2020-08-22 00:06:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc0038127c7 0xc0038127c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-22 00:06:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.711: INFO: Pod "webserver-deployment-c7997dcc8-v9vxg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v9vxg webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-v9vxg c30361ca-c48a-447d-9920-35b505aba9e3 2292838 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003812947 0xc003812948}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:06:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 22 00:06:15.711: INFO: Pod "webserver-deployment-c7997dcc8-wlqlr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wlqlr webserver-deployment-c7997dcc8- deployment-175 /api/v1/namespaces/deployment-175/pods/webserver-deployment-c7997dcc8-wlqlr 6d6e0092-fbb3-4834-b437-e37a360775b2 2292814 0 2020-08-22 00:06:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a57ef75-d0d8-4320-9c52-8014fc66d38f 0xc003812ae7 0xc003812ae8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2zmg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2zmg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2zmg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:06:15.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-175" for this suite.

• [SLOW TEST:23.481 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":182,"skipped":3061,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:06:16.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 22 00:06:17.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8556 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 22 00:06:31.580: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0822 00:06:31.494425    3735 log.go:172] (0xc00011b340) (0xc0006fa140) Create stream\nI0822 00:06:31.494526    3735 log.go:172] (0xc00011b340) (0xc0006fa140) Stream added, broadcasting: 1\nI0822 00:06:31.498204    3735 log.go:172] (0xc00011b340) Reply frame received for 1\nI0822 00:06:31.498251    3735 log.go:172] (0xc00011b340) (0xc0006fa1e0) Create stream\nI0822 00:06:31.498266    3735 log.go:172] (0xc00011b340) (0xc0006fa1e0) Stream added, broadcasting: 3\nI0822 00:06:31.499257    3735 log.go:172] (0xc00011b340) Reply frame received for 3\nI0822 00:06:31.499299    3735 log.go:172] (0xc00011b340) (0xc0006fa280) Create stream\nI0822 00:06:31.499316    3735 log.go:172] (0xc00011b340) (0xc0006fa280) Stream added, broadcasting: 5\nI0822 00:06:31.500255    3735 log.go:172] (0xc00011b340) Reply frame received for 5\nI0822 00:06:31.500297    3735 log.go:172] (0xc00011b340) (0xc00072c000) Create stream\nI0822 00:06:31.500312    3735 log.go:172] (0xc00011b340) (0xc00072c000) Stream added, broadcasting: 7\nI0822 00:06:31.501148    3735 log.go:172] (0xc00011b340) Reply frame received for 7\nI0822 00:06:31.501303    3735 log.go:172] (0xc0006fa1e0) (3) Writing data frame\nI0822 00:06:31.501371    3735 log.go:172] (0xc0006fa1e0) (3) Writing data frame\nI0822 00:06:31.502221    3735 log.go:172] (0xc00011b340) Data frame received for 5\nI0822 00:06:31.502240    3735 log.go:172] (0xc0006fa280) (5) Data frame handling\nI0822 00:06:31.502258    3735 log.go:172] (0xc0006fa280) (5) Data frame sent\nI0822 00:06:31.503019    3735 log.go:172] (0xc00011b340) Data frame received for 5\nI0822 00:06:31.503029    3735 log.go:172] (0xc0006fa280) (5) Data frame handling\nI0822 00:06:31.503034    3735 log.go:172] (0xc0006fa280) (5) Data frame sent\nI0822 00:06:31.546516    3735 log.go:172] (0xc00011b340) Data frame received for 1\nI0822 00:06:31.546553    3735 log.go:172] (0xc0006fa140) (1) Data frame handling\nI0822 00:06:31.546574    3735 log.go:172] (0xc0006fa140) (1) Data frame sent\nI0822 00:06:31.546608    3735 log.go:172] (0xc00011b340) Data frame received for 5\nI0822 00:06:31.546624    3735 log.go:172] (0xc0006fa280) (5) Data frame handling\nI0822 00:06:31.546654    3735 log.go:172] (0xc00011b340) Data frame received for 7\nI0822 00:06:31.546671    3735 log.go:172] (0xc00072c000) (7) Data frame handling\nI0822 00:06:31.547044    3735 log.go:172] (0xc00011b340) (0xc0006fa140) Stream removed, broadcasting: 1\nI0822 00:06:31.547123    3735 log.go:172] (0xc00011b340) (0xc0006fa1e0) Stream removed, broadcasting: 3\nI0822 00:06:31.547154    3735 log.go:172] (0xc00011b340) Go away received\nI0822 00:06:31.547553    3735 log.go:172] (0xc00011b340) (0xc0006fa140) Stream removed, broadcasting: 1\nI0822 00:06:31.547580    3735 log.go:172] (0xc00011b340) (0xc0006fa1e0) Stream removed, broadcasting: 3\nI0822 00:06:31.547598    3735 log.go:172] (0xc00011b340) (0xc0006fa280) Stream removed, broadcasting: 5\nI0822 00:06:31.547615    3735 log.go:172] (0xc00011b340) (0xc00072c000) Stream removed, broadcasting: 7\n"
Aug 22 00:06:31.581: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:06:34.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8556" for this suite.

• [SLOW TEST:17.813 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":183,"skipped":3084,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:06:34.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 00:06:35.233: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 00:06:35.255: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 00:06:35.257: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 00:06:35.266: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:06:35.266: INFO: 	Container app ready: true, restart count 0
Aug 22 00:06:35.266: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:06:35.266: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 00:06:35.266: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:06:35.266: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:06:35.266: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 00:06:35.510: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:06:35.511: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:06:35.511: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:06:35.511: INFO: 	Container app ready: true, restart count 0
Aug 22 00:06:35.511: INFO: e2e-test-httpd-deployment-594dddd44f-zq6z5 from kubectl-726 started at 2020-08-22 00:04:58 +0000 UTC (1 container status recorded)
Aug 22 00:06:35.511: INFO: 	Container e2e-test-httpd-deployment ready: false, restart count 0
Aug 22 00:06:35.511: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:06:35.511: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 22 00:06:35.828: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker
Aug 22 00:06:35.828: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2
Aug 22 00:06:35.828: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 22 00:06:35.828: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 22 00:06:35.828: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 22 00:06:35.828: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
Aug 22 00:06:35.828: INFO: Pod e2e-test-httpd-deployment-594dddd44f-zq6z5 requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Aug 22 00:06:35.828: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Aug 22 00:06:35.831: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca552ce3-0d03-47d4-b21f-2a62a1c6af5e.162d6e5ec89c22d5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3734/filler-pod-ca552ce3-0d03-47d4-b21f-2a62a1c6af5e to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca552ce3-0d03-47d4-b21f-2a62a1c6af5e.162d6e5f4e45a619], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca552ce3-0d03-47d4-b21f-2a62a1c6af5e.162d6e5fe02337fa], Reason = [Created], Message = [Created container filler-pod-ca552ce3-0d03-47d4-b21f-2a62a1c6af5e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca552ce3-0d03-47d4-b21f-2a62a1c6af5e.162d6e5ff99b6ae3], Reason = [Started], Message = [Started container filler-pod-ca552ce3-0d03-47d4-b21f-2a62a1c6af5e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ffef3f49-465e-49de-bc96-b3789093fc9e.162d6e5ec8d4c7ea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3734/filler-pod-ffef3f49-465e-49de-bc96-b3789093fc9e to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ffef3f49-465e-49de-bc96-b3789093fc9e.162d6e5f4f27b3d9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ffef3f49-465e-49de-bc96-b3789093fc9e.162d6e5ff492fbdf], Reason = [Created], Message = [Created container filler-pod-ffef3f49-465e-49de-bc96-b3789093fc9e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ffef3f49-465e-49de-bc96-b3789093fc9e.162d6e60186de27b], Reason = [Started], Message = [Started container filler-pod-ffef3f49-465e-49de-bc96-b3789093fc9e]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162d6e60a6d789b1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:06:45.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3734" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:11.146 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":184,"skipped":3106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:06:45.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 22 00:06:53.082: INFO: Successfully updated pod "pod-update-activedeadlineseconds-67e70fdc-028a-41f5-a62e-faaa79364113"
Aug 22 00:06:53.082: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-67e70fdc-028a-41f5-a62e-faaa79364113" in namespace "pods-5990" to be "terminated due to deadline exceeded"
Aug 22 00:06:53.089: INFO: Pod "pod-update-activedeadlineseconds-67e70fdc-028a-41f5-a62e-faaa79364113": Phase="Running", Reason="", readiness=true. Elapsed: 6.313206ms
Aug 22 00:06:55.092: INFO: Pod "pod-update-activedeadlineseconds-67e70fdc-028a-41f5-a62e-faaa79364113": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009661874s
Aug 22 00:06:55.092: INFO: Pod "pod-update-activedeadlineseconds-67e70fdc-028a-41f5-a62e-faaa79364113" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:06:55.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5990" for this suite.

• [SLOW TEST:9.367 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3151,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:06:55.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:06:55.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557" in namespace "projected-8943" to be "success or failure"
Aug 22 00:06:55.275: INFO: Pod "downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557": Phase="Pending", Reason="", readiness=false. Elapsed: 15.94579ms
Aug 22 00:06:57.279: INFO: Pod "downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020182072s
Aug 22 00:06:59.283: INFO: Pod "downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023929047s
STEP: Saw pod success
Aug 22 00:06:59.283: INFO: Pod "downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557" satisfied condition "success or failure"
Aug 22 00:06:59.286: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557 container client-container: 
STEP: delete the pod
Aug 22 00:06:59.344: INFO: Waiting for pod downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557 to disappear
Aug 22 00:06:59.352: INFO: Pod downwardapi-volume-a2628228-ded3-4a50-bb9c-58ba168e0557 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:06:59.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8943" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3157,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:06:59.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 22 00:06:59.426: INFO: Waiting up to 5m0s for pod "pod-4188faf5-b804-42ec-9344-4423f6d4235a" in namespace "emptydir-6659" to be "success or failure"
Aug 22 00:06:59.474: INFO: Pod "pod-4188faf5-b804-42ec-9344-4423f6d4235a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.696747ms
Aug 22 00:07:01.477: INFO: Pod "pod-4188faf5-b804-42ec-9344-4423f6d4235a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05102464s
Aug 22 00:07:03.480: INFO: Pod "pod-4188faf5-b804-42ec-9344-4423f6d4235a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054472838s
STEP: Saw pod success
Aug 22 00:07:03.480: INFO: Pod "pod-4188faf5-b804-42ec-9344-4423f6d4235a" satisfied condition "success or failure"
Aug 22 00:07:03.483: INFO: Trying to get logs from node jerma-worker pod pod-4188faf5-b804-42ec-9344-4423f6d4235a container test-container: 
STEP: delete the pod
Aug 22 00:07:03.512: INFO: Waiting for pod pod-4188faf5-b804-42ec-9344-4423f6d4235a to disappear
Aug 22 00:07:03.524: INFO: Pod pod-4188faf5-b804-42ec-9344-4423f6d4235a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:07:03.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6659" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3192,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:07:03.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 00:07:03.597: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 00:07:03.623: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 00:07:03.625: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 00:07:03.630: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:07:03.630: INFO: 	Container app ready: true, restart count 0
Aug 22 00:07:03.630: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:07:03.630: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 00:07:03.630: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:07:03.630: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:07:03.630: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 00:07:03.636: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:07:03.636: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 00:07:03.636: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:07:03.636: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:07:03.636: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:07:03.636: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b41e034f-1054-4ca3-8698-125453f1bea6 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b41e034f-1054-4ca3-8698-125453f1bea6 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b41e034f-1054-4ca3-8698-125453f1bea6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:07:11.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7245" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:8.242 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":188,"skipped":3213,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:07:11.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 00:07:11.871: INFO: Waiting up to 5m0s for pod "downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0" in namespace "downward-api-2428" to be "success or failure"
Aug 22 00:07:11.886: INFO: Pod "downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.659646ms
Aug 22 00:07:13.890: INFO: Pod "downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018669948s
Aug 22 00:07:15.894: INFO: Pod "downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02291568s
STEP: Saw pod success
Aug 22 00:07:15.894: INFO: Pod "downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0" satisfied condition "success or failure"
Aug 22 00:07:15.897: INFO: Trying to get logs from node jerma-worker pod downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0 container dapi-container: 
STEP: delete the pod
Aug 22 00:07:15.934: INFO: Waiting for pod downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0 to disappear
Aug 22 00:07:15.964: INFO: Pod downward-api-cb571e72-bb6b-4a86-85bc-f388fb248ad0 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:07:15.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2428" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3250,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:07:15.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-f7ncj in namespace proxy-9152
I0822 00:07:16.060091       6 runners.go:189] Created replication controller with name: proxy-service-f7ncj, namespace: proxy-9152, replica count: 1
I0822 00:07:17.110634       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 00:07:18.110862       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 00:07:19.111156       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 00:07:20.111391       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0822 00:07:21.111638       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0822 00:07:22.111836       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0822 00:07:23.112048       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0822 00:07:24.112290       6 runners.go:189] proxy-service-f7ncj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 22 00:07:24.115: INFO: setup took 8.091004062s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 22 00:07:24.123: INFO: (0) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 7.303556ms)
Aug 22 00:07:24.124: INFO: (0) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 7.994568ms)
Aug 22 00:07:24.124: INFO: (0) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 8.442327ms)
Aug 22 00:07:24.125: INFO: (0) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 9.173057ms)
Aug 22 00:07:24.125: INFO: (0) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 8.994695ms)
Aug 22 00:07:24.125: INFO: (0) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 8.967943ms)
Aug 22 00:07:24.125: INFO: (0) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 9.42733ms)
Aug 22 00:07:24.127: INFO: (0) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 11.031078ms)
Aug 22 00:07:24.127: INFO: (0) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 11.015676ms)
Aug 22 00:07:24.127: INFO: (0) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 11.785838ms)
Aug 22 00:07:24.128: INFO: (0) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 11.913622ms)
Aug 22 00:07:24.130: INFO: (0) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test<... (200; 3.759426ms)
Aug 22 00:07:24.135: INFO: (1) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 3.76887ms)
Aug 22 00:07:24.135: INFO: (1) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 4.000536ms)
Aug 22 00:07:24.136: INFO: (1) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 4.1267ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 5.526969ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 5.860365ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 5.954488ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 5.792404ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 5.871493ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 5.845696ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 5.864703ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 5.895608ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 5.848335ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 5.935635ms)
Aug 22 00:07:24.137: INFO: (1) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 6.020266ms)
Aug 22 00:07:24.141: INFO: (2) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 3.468063ms)
Aug 22 00:07:24.141: INFO: (2) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 3.424935ms)
Aug 22 00:07:24.141: INFO: (2) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 7.55051ms)
Aug 22 00:07:24.146: INFO: (2) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 8.859659ms)
Aug 22 00:07:24.146: INFO: (2) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 8.796ms)
Aug 22 00:07:24.146: INFO: (2) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 8.914952ms)
Aug 22 00:07:24.146: INFO: (2) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 8.894153ms)
Aug 22 00:07:24.146: INFO: (2) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 8.855386ms)
Aug 22 00:07:24.147: INFO: (2) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 9.079574ms)
Aug 22 00:07:24.147: INFO: (2) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 9.372215ms)
Aug 22 00:07:24.147: INFO: (2) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 9.466872ms)
Aug 22 00:07:24.147: INFO: (2) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 9.497516ms)
Aug 22 00:07:24.147: INFO: (2) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 9.69364ms)
Aug 22 00:07:24.147: INFO: (2) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 9.759484ms)
Aug 22 00:07:24.147: INFO: (2) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 9.792568ms)
Aug 22 00:07:24.150: INFO: (3) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 2.639933ms)
Aug 22 00:07:24.150: INFO: (3) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test<... (200; 4.001446ms)
Aug 22 00:07:24.151: INFO: (3) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.018535ms)
Aug 22 00:07:24.152: INFO: (3) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 4.606694ms)
Aug 22 00:07:24.152: INFO: (3) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.648181ms)
Aug 22 00:07:24.152: INFO: (3) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 4.609231ms)
Aug 22 00:07:24.152: INFO: (3) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 4.659526ms)
Aug 22 00:07:24.152: INFO: (3) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 4.641646ms)
Aug 22 00:07:24.154: INFO: (3) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 6.115343ms)
Aug 22 00:07:24.154: INFO: (3) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 6.101143ms)
Aug 22 00:07:24.154: INFO: (3) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 6.219579ms)
Aug 22 00:07:24.154: INFO: (3) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 6.305726ms)
Aug 22 00:07:24.154: INFO: (3) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 6.240553ms)
Aug 22 00:07:24.154: INFO: (3) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 6.236537ms)
Aug 22 00:07:24.156: INFO: (4) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 1.913436ms)
Aug 22 00:07:24.156: INFO: (4) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 2.338798ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.232762ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 3.243349ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 3.276224ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 3.397759ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.329579ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.445753ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.494774ms)
Aug 22 00:07:24.157: INFO: (4) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 3.67564ms)
Aug 22 00:07:24.158: INFO: (4) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 4.148979ms)
Aug 22 00:07:24.158: INFO: (4) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.247561ms)
Aug 22 00:07:24.158: INFO: (4) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 4.328014ms)
Aug 22 00:07:24.158: INFO: (4) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.429488ms)
Aug 22 00:07:24.158: INFO: (4) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 4.415272ms)
Aug 22 00:07:24.162: INFO: (5) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 3.393963ms)
Aug 22 00:07:24.162: INFO: (5) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 3.693256ms)
Aug 22 00:07:24.163: INFO: (5) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 4.539788ms)
Aug 22 00:07:24.163: INFO: (5) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.659182ms)
Aug 22 00:07:24.163: INFO: (5) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 4.992719ms)
Aug 22 00:07:24.163: INFO: (5) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 5.03057ms)
Aug 22 00:07:24.163: INFO: (5) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test<... (200; 5.114064ms)
Aug 22 00:07:24.164: INFO: (5) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 5.169242ms)
Aug 22 00:07:24.164: INFO: (5) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 5.119048ms)
Aug 22 00:07:24.164: INFO: (5) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 5.189948ms)
Aug 22 00:07:24.164: INFO: (5) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 5.177158ms)
Aug 22 00:07:24.164: INFO: (5) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 5.2257ms)
Aug 22 00:07:24.164: INFO: (5) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 5.152901ms)
Aug 22 00:07:24.164: INFO: (5) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 5.162139ms)
Aug 22 00:07:24.166: INFO: (6) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 2.597715ms)
Aug 22 00:07:24.166: INFO: (6) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 2.631906ms)
Aug 22 00:07:24.167: INFO: (6) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 3.309823ms)
Aug 22 00:07:24.167: INFO: (6) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.507427ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 3.966246ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 4.00128ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 4.043561ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 4.05546ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.047061ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 4.123136ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 4.102298ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.117343ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.14431ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 4.180236ms)
Aug 22 00:07:24.168: INFO: (6) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test<... (200; 3.149918ms)
Aug 22 00:07:24.171: INFO: (7) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 3.372746ms)
Aug 22 00:07:24.171: INFO: (7) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.381372ms)
Aug 22 00:07:24.172: INFO: (7) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 3.709302ms)
Aug 22 00:07:24.172: INFO: (7) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test (200; 3.854597ms)
Aug 22 00:07:24.172: INFO: (7) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 3.977897ms)
Aug 22 00:07:24.172: INFO: (7) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 3.905468ms)
Aug 22 00:07:24.172: INFO: (7) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 3.933578ms)
Aug 22 00:07:24.172: INFO: (7) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 3.858669ms)
Aug 22 00:07:24.173: INFO: (7) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.865559ms)
Aug 22 00:07:24.173: INFO: (7) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 4.875622ms)
Aug 22 00:07:24.175: INFO: (8) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 2.400936ms)
Aug 22 00:07:24.176: INFO: (8) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 3.201079ms)
Aug 22 00:07:24.176: INFO: (8) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.14729ms)
Aug 22 00:07:24.176: INFO: (8) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.153094ms)
Aug 22 00:07:24.176: INFO: (8) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 3.194983ms)
Aug 22 00:07:24.176: INFO: (8) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.559952ms)
Aug 22 00:07:24.177: INFO: (8) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.725458ms)
Aug 22 00:07:24.177: INFO: (8) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 3.962779ms)
Aug 22 00:07:24.177: INFO: (8) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 3.988043ms)
Aug 22 00:07:24.177: INFO: (8) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.029051ms)
Aug 22 00:07:24.177: INFO: (8) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 4.0261ms)
Aug 22 00:07:24.177: INFO: (8) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 3.991851ms)
Aug 22 00:07:24.177: INFO: (8) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 4.272434ms)
Aug 22 00:07:24.181: INFO: (9) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.504387ms)
Aug 22 00:07:24.181: INFO: (9) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 3.549367ms)
Aug 22 00:07:24.181: INFO: (9) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.752078ms)
Aug 22 00:07:24.181: INFO: (9) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 3.908447ms)
Aug 22 00:07:24.181: INFO: (9) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 3.922138ms)
Aug 22 00:07:24.181: INFO: (9) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 4.050173ms)
Aug 22 00:07:24.182: INFO: (9) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.264985ms)
Aug 22 00:07:24.182: INFO: (9) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 4.250966ms)
Aug 22 00:07:24.182: INFO: (9) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.209981ms)
Aug 22 00:07:24.183: INFO: (10) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 1.737787ms)
Aug 22 00:07:24.184: INFO: (10) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 2.762373ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 3.190703ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.373959ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.339589ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 3.56652ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.582267ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 3.632519ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 3.672851ms)
Aug 22 00:07:24.185: INFO: (10) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 3.712762ms)
Aug 22 00:07:24.186: INFO: (10) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 3.845928ms)
Aug 22 00:07:24.186: INFO: (10) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test (200; 3.022368ms)
Aug 22 00:07:24.189: INFO: (11) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 2.983694ms)
Aug 22 00:07:24.189: INFO: (11) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 3.37849ms)
Aug 22 00:07:24.189: INFO: (11) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 3.083162ms)
Aug 22 00:07:24.189: INFO: (11) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test (200; 2.139797ms)
Aug 22 00:07:24.193: INFO: (12) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 2.360498ms)
Aug 22 00:07:24.195: INFO: (12) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.684601ms)
Aug 22 00:07:24.195: INFO: (12) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 4.848276ms)
Aug 22 00:07:24.195: INFO: (12) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.842832ms)
Aug 22 00:07:24.195: INFO: (12) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.836211ms)
Aug 22 00:07:24.195: INFO: (12) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 4.835471ms)
Aug 22 00:07:24.195: INFO: (12) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 4.839718ms)
Aug 22 00:07:24.196: INFO: (12) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 4.932961ms)
Aug 22 00:07:24.196: INFO: (12) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 4.954788ms)
Aug 22 00:07:24.196: INFO: (12) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.994432ms)
Aug 22 00:07:24.196: INFO: (12) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 4.979498ms)
Aug 22 00:07:24.196: INFO: (12) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 5.729963ms)
Aug 22 00:07:24.202: INFO: (13) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 5.91132ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 6.2168ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 6.33987ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 6.215716ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 6.528881ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 6.561764ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 6.625302ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 6.635908ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 6.62546ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 6.778584ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 6.953363ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 7.035144ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 6.985742ms)
Aug 22 00:07:24.203: INFO: (13) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 4.289722ms)
Aug 22 00:07:24.208: INFO: (14) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 4.259058ms)
Aug 22 00:07:24.208: INFO: (14) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 4.440245ms)
Aug 22 00:07:24.208: INFO: (14) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 4.321167ms)
Aug 22 00:07:24.208: INFO: (14) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 4.395849ms)
Aug 22 00:07:24.208: INFO: (14) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.402211ms)
Aug 22 00:07:24.208: INFO: (14) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.412638ms)
Aug 22 00:07:24.208: INFO: (14) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 2.130939ms)
Aug 22 00:07:24.212: INFO: (15) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 2.419694ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 4.021753ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.028929ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.075829ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 4.095692ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 4.033968ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 4.147859ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 4.086615ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 4.091299ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test (200; 4.214242ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 4.262124ms)
Aug 22 00:07:24.214: INFO: (15) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.284328ms)
Aug 22 00:07:24.218: INFO: (16) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 4.268548ms)
Aug 22 00:07:24.219: INFO: (16) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 4.962343ms)
Aug 22 00:07:24.224: INFO: (16) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 9.826619ms)
Aug 22 00:07:24.224: INFO: (16) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 9.843616ms)
Aug 22 00:07:24.224: INFO: (16) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 9.851903ms)
Aug 22 00:07:24.224: INFO: (16) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test (200; 10.060163ms)
Aug 22 00:07:24.224: INFO: (16) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 10.119288ms)
Aug 22 00:07:24.224: INFO: (16) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 10.053985ms)
Aug 22 00:07:24.227: INFO: (17) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 2.492714ms)
Aug 22 00:07:24.227: INFO: (17) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 2.512373ms)
Aug 22 00:07:24.227: INFO: (17) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 3.316112ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 3.678866ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.594787ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test (200; 4.249324ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 4.341716ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 4.33654ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.318559ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 4.3251ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 4.327104ms)
Aug 22 00:07:24.228: INFO: (17) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 4.376013ms)
Aug 22 00:07:24.229: INFO: (17) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 4.712403ms)
Aug 22 00:07:24.229: INFO: (17) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 4.71406ms)
Aug 22 00:07:24.229: INFO: (17) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 4.924196ms)
Aug 22 00:07:24.232: INFO: (18) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 2.782607ms)
Aug 22 00:07:24.232: INFO: (18) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 2.697873ms)
Aug 22 00:07:24.232: INFO: (18) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 2.833602ms)
Aug 22 00:07:24.232: INFO: (18) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 2.961359ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:1080/proxy/: ... (200; 3.662521ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.878453ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.876418ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8/proxy/: test (200; 3.864044ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 3.892461ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 3.920282ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 3.988825ms)
Aug 22 00:07:24.233: INFO: (18) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: test (200; 2.547591ms)
Aug 22 00:07:24.236: INFO: (19) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:460/proxy/: tls baz (200; 2.751124ms)
Aug 22 00:07:24.236: INFO: (19) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:443/proxy/: ... (200; 2.805596ms)
Aug 22 00:07:24.237: INFO: (19) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.433756ms)
Aug 22 00:07:24.237: INFO: (19) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:1080/proxy/: test<... (200; 3.508149ms)
Aug 22 00:07:24.237: INFO: (19) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.787996ms)
Aug 22 00:07:24.237: INFO: (19) /api/v1/namespaces/proxy-9152/pods/https:proxy-service-f7ncj-vtmg8:462/proxy/: tls qux (200; 3.750302ms)
Aug 22 00:07:24.237: INFO: (19) /api/v1/namespaces/proxy-9152/pods/proxy-service-f7ncj-vtmg8:162/proxy/: bar (200; 3.780584ms)
Aug 22 00:07:24.237: INFO: (19) /api/v1/namespaces/proxy-9152/pods/http:proxy-service-f7ncj-vtmg8:160/proxy/: foo (200; 3.884489ms)
Aug 22 00:07:24.238: INFO: (19) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname1/proxy/: tls baz (200; 4.556324ms)
Aug 22 00:07:24.238: INFO: (19) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname1/proxy/: foo (200; 4.671045ms)
Aug 22 00:07:24.238: INFO: (19) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname1/proxy/: foo (200; 4.912022ms)
Aug 22 00:07:24.238: INFO: (19) /api/v1/namespaces/proxy-9152/services/proxy-service-f7ncj:portname2/proxy/: bar (200; 5.085157ms)
Aug 22 00:07:24.238: INFO: (19) /api/v1/namespaces/proxy-9152/services/https:proxy-service-f7ncj:tlsportname2/proxy/: tls qux (200; 5.069081ms)
Aug 22 00:07:24.238: INFO: (19) /api/v1/namespaces/proxy-9152/services/http:proxy-service-f7ncj:portname2/proxy/: bar (200; 5.025035ms)
STEP: deleting ReplicationController proxy-service-f7ncj in namespace proxy-9152, will wait for the garbage collector to delete the pods
Aug 22 00:07:24.301: INFO: Deleting ReplicationController proxy-service-f7ncj took: 11.134937ms
Aug 22 00:07:24.601: INFO: Terminating ReplicationController proxy-service-f7ncj pods took: 300.221965ms
[AfterEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:07:31.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9152" for this suite.

• [SLOW TEST:15.915 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":190,"skipped":3261,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:07:31.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 00:07:31.947: INFO: Waiting up to 5m0s for pod "downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3" in namespace "downward-api-8290" to be "success or failure"
Aug 22 00:07:31.968: INFO: Pod "downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.349476ms
Aug 22 00:07:33.972: INFO: Pod "downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02510635s
Aug 22 00:07:36.097: INFO: Pod "downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150293029s
Aug 22 00:07:38.220: INFO: Pod "downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.273593907s
STEP: Saw pod success
Aug 22 00:07:38.220: INFO: Pod "downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3" satisfied condition "success or failure"
Aug 22 00:07:38.223: INFO: Trying to get logs from node jerma-worker2 pod downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3 container dapi-container: 
STEP: delete the pod
Aug 22 00:07:38.521: INFO: Waiting for pod downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3 to disappear
Aug 22 00:07:38.573: INFO: Pod downward-api-33b6bb27-f8b2-48ea-ae08-229a3bf4c4b3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:07:38.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8290" for this suite.

• [SLOW TEST:6.694 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3321,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:07:38.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 22 00:07:51.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 00:07:51.433: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 00:07:53.433: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 00:07:53.436: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 22 00:07:55.433: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 22 00:07:55.437: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:07:55.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6091" for this suite.

• [SLOW TEST:16.863 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3323,"failed":0}
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:07:55.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 22 00:07:55.885: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9944" to be "success or failure"
Aug 22 00:07:55.983: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 98.586436ms
Aug 22 00:07:58.055: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170458144s
Aug 22 00:08:00.059: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174606039s
Aug 22 00:08:02.063: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.178447469s
STEP: Saw pod success
Aug 22 00:08:02.063: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 22 00:08:02.355: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 22 00:08:02.627: INFO: Waiting for pod pod-host-path-test to disappear
Aug 22 00:08:02.676: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:08:02.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9944" for this suite.

• [SLOW TEST:7.425 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3324,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:08:02.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 22 00:08:03.488: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 22 00:08:03.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4665'
Aug 22 00:08:04.146: INFO: stderr: ""
Aug 22 00:08:04.146: INFO: stdout: "service/agnhost-slave created\n"
Aug 22 00:08:04.147: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 22 00:08:04.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4665'
Aug 22 00:08:05.117: INFO: stderr: ""
Aug 22 00:08:05.117: INFO: stdout: "service/agnhost-master created\n"
Aug 22 00:08:05.117: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 22 00:08:05.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4665'
Aug 22 00:08:05.510: INFO: stderr: ""
Aug 22 00:08:05.510: INFO: stdout: "service/frontend created\n"
Aug 22 00:08:05.510: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 22 00:08:05.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4665'
Aug 22 00:08:05.768: INFO: stderr: ""
Aug 22 00:08:05.768: INFO: stdout: "deployment.apps/frontend created\n"
Aug 22 00:08:05.768: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 22 00:08:05.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4665'
Aug 22 00:08:06.151: INFO: stderr: ""
Aug 22 00:08:06.151: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 22 00:08:06.152: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 22 00:08:06.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4665'
Aug 22 00:08:06.451: INFO: stderr: ""
Aug 22 00:08:06.451: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 22 00:08:06.451: INFO: Waiting for all frontend pods to be Running.
Aug 22 00:08:16.502: INFO: Waiting for frontend to serve content.
Aug 22 00:08:16.513: INFO: Trying to add a new entry to the guestbook.
Aug 22 00:08:16.523: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 22 00:08:16.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4665'
Aug 22 00:08:16.730: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 00:08:16.730: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 00:08:16.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4665'
Aug 22 00:08:16.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 00:08:16.898: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 00:08:16.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4665'
Aug 22 00:08:17.018: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 00:08:17.018: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 00:08:17.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4665'
Aug 22 00:08:17.122: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 00:08:17.122: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 00:08:17.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4665'
Aug 22 00:08:17.231: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 00:08:17.231: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 22 00:08:17.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4665'
Aug 22 00:08:17.324: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 22 00:08:17.324: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:08:17.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4665" for this suite.

• [SLOW TEST:14.460 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":194,"skipped":3326,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:08:17.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 00:08:17.447: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 00:08:17.509: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 00:08:17.510: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 00:08:17.514: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.515: INFO: 	Container app ready: true, restart count 0
Aug 22 00:08:17.515: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.515: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 00:08:17.515: INFO: frontend-6c5f89d5d4-9fnd7 from kubectl-4665 started at 2020-08-22 00:08:06 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.515: INFO: 	Container guestbook-frontend ready: true, restart count 0
Aug 22 00:08:17.515: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.515: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:08:17.515: INFO: frontend-6c5f89d5d4-8np6s from kubectl-4665 started at 2020-08-22 00:08:06 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.515: INFO: 	Container guestbook-frontend ready: true, restart count 0
Aug 22 00:08:17.515: INFO: agnhost-slave-774cfc759f-6gv7t from kubectl-4665 started at 2020-08-22 00:08:06 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.515: INFO: 	Container slave ready: true, restart count 0
Aug 22 00:08:17.515: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 00:08:17.521: INFO: frontend-6c5f89d5d4-zqh8g from kubectl-4665 started at 2020-08-22 00:08:06 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.521: INFO: 	Container guestbook-frontend ready: true, restart count 0
Aug 22 00:08:17.521: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.521: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 00:08:17.521: INFO: agnhost-master-74c46fb7d4-dtr2r from kubectl-4665 started at 2020-08-22 00:08:06 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.521: INFO: 	Container master ready: true, restart count 0
Aug 22 00:08:17.521: INFO: agnhost-slave-774cfc759f-kdm5q from kubectl-4665 started at 2020-08-22 00:08:06 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.521: INFO: 	Container slave ready: true, restart count 0
Aug 22 00:08:17.521: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.521: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:08:17.521: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 22 00:08:17.521: INFO: 	Container app ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to find a node that can run it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e0c38c12-2154-4d82-b54d-bda5d06fcd86 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expecting it to be scheduled
STEP: Trying to create another pod (pod2) with the same hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, and expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-e0c38c12-2154-4d82-b54d-bda5d06fcd86 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e0c38c12-2154-4d82-b54d-bda5d06fcd86
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:08:35.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3183" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:18.523 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":195,"skipped":3334,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:08:35.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:08:36.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a" in namespace "projected-5122" to be "success or failure"
Aug 22 00:08:36.055: INFO: Pod "downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.909302ms
Aug 22 00:08:38.058: INFO: Pod "downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016835992s
Aug 22 00:08:40.062: INFO: Pod "downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020792693s
STEP: Saw pod success
Aug 22 00:08:40.062: INFO: Pod "downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a" satisfied condition "success or failure"
Aug 22 00:08:40.064: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a container client-container: 
STEP: delete the pod
Aug 22 00:08:40.112: INFO: Waiting for pod downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a to disappear
Aug 22 00:08:40.145: INFO: Pod downwardapi-volume-5d9b9b4b-19b1-4614-8cb2-0d40d6f1755a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:08:40.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5122" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3334,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:08:40.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 22 00:08:46.315: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 22 00:08:56.420: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:08:56.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7154" for this suite.

• [SLOW TEST:16.278 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":197,"skipped":3365,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:08:56.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 22 00:08:56.527: INFO: Waiting up to 5m0s for pod "pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211" in namespace "emptydir-9089" to be "success or failure"
Aug 22 00:08:56.530: INFO: Pod "pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322876ms
Aug 22 00:08:58.534: INFO: Pod "pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00738194s
Aug 22 00:09:00.539: INFO: Pod "pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011973847s
STEP: Saw pod success
Aug 22 00:09:00.539: INFO: Pod "pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211" satisfied condition "success or failure"
Aug 22 00:09:00.542: INFO: Trying to get logs from node jerma-worker2 pod pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211 container test-container: 
STEP: delete the pod
Aug 22 00:09:00.562: INFO: Waiting for pod pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211 to disappear
Aug 22 00:09:00.566: INFO: Pod pod-2a6a8fb6-f7ce-4bfc-9638-32c245fb4211 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:09:00.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9089" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3372,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:09:00.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:09:00.622: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 22 00:09:02.709: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:09:03.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1882" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":199,"skipped":3391,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:09:03.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Aug 22 00:09:04.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3544 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 22 00:09:04.395: INFO: stderr: ""
Aug 22 00:09:04.395: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Aug 22 00:09:04.395: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 22 00:09:04.396: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3544" to be "running and ready, or succeeded"
Aug 22 00:09:04.404: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116966ms
Aug 22 00:09:06.427: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03153926s
Aug 22 00:09:08.517: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.121159648s
Aug 22 00:09:08.517: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 22 00:09:08.517: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 22 00:09:08.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3544'
Aug 22 00:09:08.643: INFO: stderr: ""
Aug 22 00:09:08.643: INFO: stdout: "I0822 00:09:07.547351       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/nq7w 263\nI0822 00:09:07.747475       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wc6 393\nI0822 00:09:07.947528       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/mzx 565\nI0822 00:09:08.147523       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/j76n 276\nI0822 00:09:08.347513       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/wg6k 292\nI0822 00:09:08.547568       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/87p 414\n"
Aug 22 00:09:10.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3544'
Aug 22 00:09:11.006: INFO: stderr: ""
Aug 22 00:09:11.006: INFO: stdout: "I0822 00:09:07.547351       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/nq7w 263\nI0822 00:09:07.747475       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wc6 393\nI0822 00:09:07.947528       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/mzx 565\nI0822 00:09:08.147523       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/j76n 276\nI0822 00:09:08.347513       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/wg6k 292\nI0822 00:09:08.547568       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/87p 414\nI0822 00:09:08.747561       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/xmjt 409\nI0822 00:09:08.947524       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/tgtw 438\nI0822 00:09:09.147539       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/ww7r 321\nI0822 00:09:09.347518       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/ls6 592\nI0822 00:09:09.547555       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/l5nq 272\nI0822 00:09:09.749344       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/gtd 467\nI0822 00:09:09.947549       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/68mh 237\nI0822 00:09:10.147531       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/5fr5 301\nI0822 00:09:10.347550       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/bl5t 272\nI0822 00:09:10.547509       1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/8sj 293\nI0822 00:09:10.747608       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/qzk 590\nI0822 00:09:10.947570       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/z28n 582\n"
STEP: limiting log lines
Aug 22 00:09:11.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3544 --tail=1'
Aug 22 00:09:11.111: INFO: stderr: ""
Aug 22 00:09:11.111: INFO: stdout: "I0822 00:09:10.947570       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/z28n 582\n"
Aug 22 00:09:11.111: INFO: got output "I0822 00:09:10.947570       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/z28n 582\n"
STEP: limiting log bytes
Aug 22 00:09:11.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3544 --limit-bytes=1'
Aug 22 00:09:11.221: INFO: stderr: ""
Aug 22 00:09:11.222: INFO: stdout: "I"
Aug 22 00:09:11.222: INFO: got output "I"
STEP: exposing timestamps
Aug 22 00:09:11.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3544 --tail=1 --timestamps'
Aug 22 00:09:11.345: INFO: stderr: ""
Aug 22 00:09:11.345: INFO: stdout: "2020-08-22T00:09:11.147615813Z I0822 00:09:11.147528       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/d8q 514\n"
Aug 22 00:09:11.345: INFO: got output "2020-08-22T00:09:11.147615813Z I0822 00:09:11.147528       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/d8q 514\n"
STEP: restricting to a time range
Aug 22 00:09:13.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3544 --since=1s'
Aug 22 00:09:14.245: INFO: stderr: ""
Aug 22 00:09:14.245: INFO: stdout: "I0822 00:09:13.347595       1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/z9h 296\nI0822 00:09:13.547508       1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/tcv4 379\nI0822 00:09:13.747564       1 logs_generator.go:76] 31 POST /api/v1/namespaces/kube-system/pods/jwj 541\nI0822 00:09:13.947506       1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/7kjr 374\nI0822 00:09:14.147547       1 logs_generator.go:76] 33 GET /api/v1/namespaces/ns/pods/42q 451\n"
Aug 22 00:09:14.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3544 --since=24h'
Aug 22 00:09:14.369: INFO: stderr: ""
Aug 22 00:09:14.369: INFO: stdout: "I0822 00:09:07.547351       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/nq7w 263\nI0822 00:09:07.747475       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wc6 393\nI0822 00:09:07.947528       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/mzx 565\nI0822 00:09:08.147523       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/j76n 276\nI0822 00:09:08.347513       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/wg6k 292\nI0822 00:09:08.547568       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/87p 414\nI0822 00:09:08.747561       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/xmjt 409\nI0822 00:09:08.947524       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/tgtw 438\nI0822 00:09:09.147539       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/ww7r 321\nI0822 00:09:09.347518       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/ls6 592\nI0822 00:09:09.547555       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/l5nq 272\nI0822 00:09:09.749344       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/gtd 467\nI0822 00:09:09.947549       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/68mh 237\nI0822 00:09:10.147531       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/5fr5 301\nI0822 00:09:10.347550       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/bl5t 272\nI0822 00:09:10.547509       1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/8sj 293\nI0822 00:09:10.747608       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/qzk 590\nI0822 00:09:10.947570       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/z28n 582\nI0822 00:09:11.147528       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/d8q 514\nI0822 00:09:11.347503       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/szq5 280\nI0822 00:09:11.547595       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/krpf 261\nI0822 00:09:11.747549       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/pcj7 401\nI0822 00:09:11.947506       1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/brsm 458\nI0822 00:09:12.147551       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/lzk 562\nI0822 00:09:12.347568       1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/vvlf 398\nI0822 00:09:12.547499       1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/hnrz 439\nI0822 00:09:12.747518       1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/s7vw 431\nI0822 00:09:12.947555       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/2pq 271\nI0822 00:09:13.147592       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/bn6 462\nI0822 00:09:13.347595       1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/z9h 296\nI0822 00:09:13.547508       1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/tcv4 379\nI0822 00:09:13.747564       1 logs_generator.go:76] 31 POST /api/v1/namespaces/kube-system/pods/jwj 541\nI0822 00:09:13.947506       1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/7kjr 374\nI0822 00:09:14.147547       1 logs_generator.go:76] 33 GET /api/v1/namespaces/ns/pods/42q 451\nI0822 00:09:14.347544       1 logs_generator.go:76] 34 POST /api/v1/namespaces/default/pods/28s9 392\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 22 00:09:14.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3544'
Aug 22 00:09:21.770: INFO: stderr: ""
Aug 22 00:09:21.770: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:09:21.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3544" for this suite.

• [SLOW TEST:17.947 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":200,"skipped":3394,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:09:21.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 22 00:09:21.828: INFO: >>> kubeConfig: /root/.kube/config
Aug 22 00:09:24.692: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:09:34.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6726" for this suite.

• [SLOW TEST:12.417 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":201,"skipped":3399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:09:34.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:09:47.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2695" for this suite.

• [SLOW TEST:13.748 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":202,"skipped":3489,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:09:47.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-f2365ea3-55b6-4b6b-8707-28dd394acc0d
STEP: Creating a pod to test consume secrets
Aug 22 00:09:48.029: INFO: Waiting up to 5m0s for pod "pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c" in namespace "secrets-5783" to be "success or failure"
Aug 22 00:09:48.033: INFO: Pod "pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091019ms
Aug 22 00:09:50.062: INFO: Pod "pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032835404s
Aug 22 00:09:52.066: INFO: Pod "pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036966912s
STEP: Saw pod success
Aug 22 00:09:52.066: INFO: Pod "pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c" satisfied condition "success or failure"
Aug 22 00:09:52.069: INFO: Trying to get logs from node jerma-worker pod pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c container secret-volume-test: 
STEP: delete the pod
Aug 22 00:09:52.188: INFO: Waiting for pod pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c to disappear
Aug 22 00:09:52.196: INFO: Pod pod-secrets-cca75a9e-6bff-459f-b918-ebf913a2782c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:09:52.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5783" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3493,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:09:52.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 00:09:52.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4185'
Aug 22 00:09:52.378: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 22 00:09:52.378: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Aug 22 00:09:52.386: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug 22 00:09:52.400: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 22 00:09:52.428: INFO: scanned /root for discovery docs: 
Aug 22 00:09:52.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4185'
Aug 22 00:10:08.425: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 22 00:10:08.426: INFO: stdout: "Created e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb\nScaling up e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 22 00:10:08.426: INFO: stdout: "Created e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb\nScaling up e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 22 00:10:08.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4185'
Aug 22 00:10:08.563: INFO: stderr: ""
Aug 22 00:10:08.563: INFO: stdout: "e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb-cb282 "
Aug 22 00:10:08.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb-cb282 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4185'
Aug 22 00:10:08.665: INFO: stderr: ""
Aug 22 00:10:08.665: INFO: stdout: "true"
Aug 22 00:10:08.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb-cb282 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4185'
Aug 22 00:10:08.751: INFO: stderr: ""
Aug 22 00:10:08.751: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 22 00:10:08.751: INFO: e2e-test-httpd-rc-364539c92b30e82dcd0562c9494ef9fb-cb282 is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 22 00:10:08.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4185'
Aug 22 00:10:08.856: INFO: stderr: ""
Aug 22 00:10:08.856: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:10:08.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4185" for this suite.

• [SLOW TEST:16.718 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":204,"skipped":3502,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:10:08.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:11:09.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-290" for this suite.

• [SLOW TEST:60.101 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3508,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:11:09.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4465
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 22 00:11:09.099: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 22 00:11:33.259: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.26:8080/dial?request=hostname&protocol=udp&host=10.244.2.25&port=8081&tries=1'] Namespace:pod-network-test-4465 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 00:11:33.259: INFO: >>> kubeConfig: /root/.kube/config
I0822 00:11:33.300289       6 log.go:172] (0xc003b2a370) (0xc00190cbe0) Create stream
I0822 00:11:33.300318       6 log.go:172] (0xc003b2a370) (0xc00190cbe0) Stream added, broadcasting: 1
I0822 00:11:33.302181       6 log.go:172] (0xc003b2a370) Reply frame received for 1
I0822 00:11:33.302225       6 log.go:172] (0xc003b2a370) (0xc0029d0140) Create stream
I0822 00:11:33.302242       6 log.go:172] (0xc003b2a370) (0xc0029d0140) Stream added, broadcasting: 3
I0822 00:11:33.303147       6 log.go:172] (0xc003b2a370) Reply frame received for 3
I0822 00:11:33.303177       6 log.go:172] (0xc003b2a370) (0xc00190cf00) Create stream
I0822 00:11:33.303187       6 log.go:172] (0xc003b2a370) (0xc00190cf00) Stream added, broadcasting: 5
I0822 00:11:33.304001       6 log.go:172] (0xc003b2a370) Reply frame received for 5
I0822 00:11:33.373974       6 log.go:172] (0xc003b2a370) Data frame received for 3
I0822 00:11:33.373996       6 log.go:172] (0xc0029d0140) (3) Data frame handling
I0822 00:11:33.374012       6 log.go:172] (0xc0029d0140) (3) Data frame sent
I0822 00:11:33.374891       6 log.go:172] (0xc003b2a370) Data frame received for 3
I0822 00:11:33.374923       6 log.go:172] (0xc0029d0140) (3) Data frame handling
I0822 00:11:33.375030       6 log.go:172] (0xc003b2a370) Data frame received for 5
I0822 00:11:33.375053       6 log.go:172] (0xc00190cf00) (5) Data frame handling
I0822 00:11:33.376457       6 log.go:172] (0xc003b2a370) Data frame received for 1
I0822 00:11:33.376477       6 log.go:172] (0xc00190cbe0) (1) Data frame handling
I0822 00:11:33.376493       6 log.go:172] (0xc00190cbe0) (1) Data frame sent
I0822 00:11:33.376512       6 log.go:172] (0xc003b2a370) (0xc00190cbe0) Stream removed, broadcasting: 1
I0822 00:11:33.376533       6 log.go:172] (0xc003b2a370) Go away received
I0822 00:11:33.376927       6 log.go:172] (0xc003b2a370) (0xc00190cbe0) Stream removed, broadcasting: 1
I0822 00:11:33.376962       6 log.go:172] (0xc003b2a370) (0xc0029d0140) Stream removed, broadcasting: 3
I0822 00:11:33.376983       6 log.go:172] (0xc003b2a370) (0xc00190cf00) Stream removed, broadcasting: 5
Aug 22 00:11:33.377: INFO: Waiting for responses: map[]
Aug 22 00:11:33.380: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.26:8080/dial?request=hostname&protocol=udp&host=10.244.1.9&port=8081&tries=1'] Namespace:pod-network-test-4465 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 00:11:33.380: INFO: >>> kubeConfig: /root/.kube/config
I0822 00:11:33.413931       6 log.go:172] (0xc003b2aa50) (0xc00190d720) Create stream
I0822 00:11:33.413958       6 log.go:172] (0xc003b2aa50) (0xc00190d720) Stream added, broadcasting: 1
I0822 00:11:33.415984       6 log.go:172] (0xc003b2aa50) Reply frame received for 1
I0822 00:11:33.416034       6 log.go:172] (0xc003b2aa50) (0xc0028b8c80) Create stream
I0822 00:11:33.416049       6 log.go:172] (0xc003b2aa50) (0xc0028b8c80) Stream added, broadcasting: 3
I0822 00:11:33.417163       6 log.go:172] (0xc003b2aa50) Reply frame received for 3
I0822 00:11:33.417188       6 log.go:172] (0xc003b2aa50) (0xc000f04e60) Create stream
I0822 00:11:33.417197       6 log.go:172] (0xc003b2aa50) (0xc000f04e60) Stream added, broadcasting: 5
I0822 00:11:33.418278       6 log.go:172] (0xc003b2aa50) Reply frame received for 5
I0822 00:11:33.486065       6 log.go:172] (0xc003b2aa50) Data frame received for 3
I0822 00:11:33.486097       6 log.go:172] (0xc0028b8c80) (3) Data frame handling
I0822 00:11:33.486116       6 log.go:172] (0xc0028b8c80) (3) Data frame sent
I0822 00:11:33.486481       6 log.go:172] (0xc003b2aa50) Data frame received for 3
I0822 00:11:33.486525       6 log.go:172] (0xc0028b8c80) (3) Data frame handling
I0822 00:11:33.486679       6 log.go:172] (0xc003b2aa50) Data frame received for 5
I0822 00:11:33.486710       6 log.go:172] (0xc000f04e60) (5) Data frame handling
I0822 00:11:33.488479       6 log.go:172] (0xc003b2aa50) Data frame received for 1
I0822 00:11:33.488506       6 log.go:172] (0xc00190d720) (1) Data frame handling
I0822 00:11:33.488530       6 log.go:172] (0xc00190d720) (1) Data frame sent
I0822 00:11:33.488553       6 log.go:172] (0xc003b2aa50) (0xc00190d720) Stream removed, broadcasting: 1
I0822 00:11:33.488570       6 log.go:172] (0xc003b2aa50) Go away received
I0822 00:11:33.488977       6 log.go:172] (0xc003b2aa50) (0xc00190d720) Stream removed, broadcasting: 1
I0822 00:11:33.489005       6 log.go:172] (0xc003b2aa50) (0xc0028b8c80) Stream removed, broadcasting: 3
I0822 00:11:33.489013       6 log.go:172] (0xc003b2aa50) (0xc000f04e60) Stream removed, broadcasting: 5
Aug 22 00:11:33.489: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:11:33.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4465" for this suite.

• [SLOW TEST:24.474 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:11:33.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-26de1e7a-4f71-417d-b1ae-70d1d1affb3a
STEP: Creating a pod to test consume secrets
Aug 22 00:11:33.642: INFO: Waiting up to 5m0s for pod "pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa" in namespace "secrets-1794" to be "success or failure"
Aug 22 00:11:33.652: INFO: Pod "pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa": Phase="Pending", Reason="", readiness=false. Elapsed: 9.700662ms
Aug 22 00:11:35.656: INFO: Pod "pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01356541s
Aug 22 00:11:37.661: INFO: Pod "pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018238122s
STEP: Saw pod success
Aug 22 00:11:37.661: INFO: Pod "pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa" satisfied condition "success or failure"
Aug 22 00:11:37.664: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa container secret-volume-test: 
STEP: delete the pod
Aug 22 00:11:37.708: INFO: Waiting for pod pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa to disappear
Aug 22 00:11:37.733: INFO: Pod pod-secrets-3651345b-6e4e-4536-88ea-15d4a0b47afa no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:11:37.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1794" for this suite.
STEP: Destroying namespace "secret-namespace-220" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3542,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:11:37.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 22 00:11:37.798: INFO: Waiting up to 5m0s for pod "pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3" in namespace "emptydir-881" to be "success or failure"
Aug 22 00:11:37.808: INFO: Pod "pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.807123ms
Aug 22 00:11:39.877: INFO: Pod "pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078959171s
Aug 22 00:11:41.908: INFO: Pod "pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3": Phase="Running", Reason="", readiness=true. Elapsed: 4.109120342s
Aug 22 00:11:43.912: INFO: Pod "pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113045895s
STEP: Saw pod success
Aug 22 00:11:43.912: INFO: Pod "pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3" satisfied condition "success or failure"
Aug 22 00:11:43.915: INFO: Trying to get logs from node jerma-worker pod pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3 container test-container: 
STEP: delete the pod
Aug 22 00:11:43.973: INFO: Waiting for pod pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3 to disappear
Aug 22 00:11:43.983: INFO: Pod pod-91ac01c1-e1f2-499c-8d4b-e8eb6cbd9fb3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:11:43.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-881" for this suite.

• [SLOW TEST:6.239 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3574,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:11:43.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 00:11:44.065: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 00:11:44.076: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 00:11:44.078: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 00:11:44.083: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:11:44.083: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 00:11:44.083: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:11:44.083: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:11:44.083: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:11:44.083: INFO: 	Container app ready: true, restart count 0
Aug 22 00:11:44.083: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 00:11:44.088: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:11:44.088: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:11:44.088: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:11:44.088: INFO: 	Container app ready: true, restart count 0
Aug 22 00:11:44.088: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:11:44.088: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-03b84ccb-1e3b-4db0-abaf-064f6e6f378f 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node on which pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-03b84ccb-1e3b-4db0-abaf-064f6e6f378f off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-03b84ccb-1e3b-4db0-abaf-064f6e6f378f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:16:52.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-200" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:308.268 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":209,"skipped":3584,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:16:52.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:16:52.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:16:56.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5047" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3586,"failed":0}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:16:56.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-ecc13d9e-27c9-4b0b-9ae0-1b312d8e945c
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:16:56.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1044" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":211,"skipped":3591,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:16:56.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-a81e017e-3b02-4c4b-bc7d-4a95e4502ebd
STEP: Creating a pod to test consume configMaps
Aug 22 00:16:56.780: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269" in namespace "projected-6308" to be "success or failure"
Aug 22 00:16:56.800: INFO: Pod "pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269": Phase="Pending", Reason="", readiness=false. Elapsed: 20.229498ms
Aug 22 00:16:58.881: INFO: Pod "pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101041741s
Aug 22 00:17:00.886: INFO: Pod "pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105754056s
STEP: Saw pod success
Aug 22 00:17:00.886: INFO: Pod "pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269" satisfied condition "success or failure"
Aug 22 00:17:00.889: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 22 00:17:01.114: INFO: Waiting for pod pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269 to disappear
Aug 22 00:17:01.210: INFO: Pod pod-projected-configmaps-28c03f79-b0c4-43a4-b1c3-0ddfe3710269 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:01.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6308" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3600,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:01.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:01.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2197" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3606,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:01.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 22 00:17:01.439: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 22 00:17:01.449: INFO: Waiting for terminating namespaces to be deleted...
Aug 22 00:17:01.451: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 22 00:17:01.468: INFO: pod-exec-websocket-746b1df2-af84-48a4-b840-0477a6acdc2d from pods-5047 started at 2020-08-22 00:16:52 +0000 UTC (1 container status recorded)
Aug 22 00:17:01.468: INFO: 	Container main ready: true, restart count 0
Aug 22 00:17:01.468: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:17:01.468: INFO: 	Container app ready: true, restart count 0
Aug 22 00:17:01.468: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:17:01.468: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 22 00:17:01.468: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:17:01.468: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:17:01.468: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 22 00:17:01.472: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 22 00:17:01.472: INFO: 	Container app ready: true, restart count 0
Aug 22 00:17:01.472: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:17:01.472: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 22 00:17:01.472: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 22 00:17:01.472: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162d6ef06284e5ef], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:02.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2837" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":214,"skipped":3639,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:02.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:17:02.903: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:17:04.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652222, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652222, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652222, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652222, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:17:07.944: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:08.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9473" for this suite.
STEP: Destroying namespace "webhook-9473-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.646 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":215,"skipped":3648,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:08.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 22 00:17:08.296: INFO: Waiting up to 5m0s for pod "pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1" in namespace "emptydir-672" to be "success or failure"
Aug 22 00:17:08.299: INFO: Pod "pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.213472ms
Aug 22 00:17:10.306: INFO: Pod "pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01032073s
Aug 22 00:17:12.310: INFO: Pod "pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014382956s
STEP: Saw pod success
Aug 22 00:17:12.310: INFO: Pod "pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1" satisfied condition "success or failure"
Aug 22 00:17:12.313: INFO: Trying to get logs from node jerma-worker2 pod pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1 container test-container: 
STEP: delete the pod
Aug 22 00:17:12.355: INFO: Waiting for pod pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1 to disappear
Aug 22 00:17:12.383: INFO: Pod pod-909dcba4-5d35-4d4d-b65b-ad680b6f01d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:12.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-672" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3663,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:12.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:17:12.438: INFO: Creating deployment "test-recreate-deployment"
Aug 22 00:17:12.462: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 22 00:17:12.515: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 22 00:17:14.522: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 22 00:17:14.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652232, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652232, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652232, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652232, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:17:16.529: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 22 00:17:16.536: INFO: Updating deployment test-recreate-deployment
Aug 22 00:17:16.536: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 00:17:16.771: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-2947 /apis/apps/v1/namespaces/deployment-2947/deployments/test-recreate-deployment 956aa8eb-883d-4d52-827e-d40009799728 2296431 2 2020-08-22 00:17:12 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044eac28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-22 00:17:16 +0000 UTC,LastTransitionTime:2020-08-22 00:17:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-22 00:17:16 +0000 UTC,LastTransitionTime:2020-08-22 00:17:12 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 22 00:17:16.993: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-2947 /apis/apps/v1/namespaces/deployment-2947/replicasets/test-recreate-deployment-5f94c574ff 85649b6e-4795-4744-b7e7-5dd036b176a0 2296428 1 2020-08-22 00:17:16 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 956aa8eb-883d-4d52-827e-d40009799728 0xc0044eafc7 0xc0044eafc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044eb028  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 00:17:16.993: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 22 00:17:16.993: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-2947 /apis/apps/v1/namespaces/deployment-2947/replicasets/test-recreate-deployment-799c574856 9dbfa9b3-389c-4de8-977b-68c860f73860 2296420 2 2020-08-22 00:17:12 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 956aa8eb-883d-4d52-827e-d40009799728 0xc0044eb097 0xc0044eb098}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044eb108  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 00:17:17.003: INFO: Pod "test-recreate-deployment-5f94c574ff-vwj6j" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-vwj6j test-recreate-deployment-5f94c574ff- deployment-2947 /api/v1/namespaces/deployment-2947/pods/test-recreate-deployment-5f94c574ff-vwj6j d50c2ee2-22b7-48c5-ab58-1cc1e17fe342 2296433 0 2020-08-22 00:17:16 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 85649b6e-4795-4744-b7e7-5dd036b176a0 0xc0044eb577 0xc0044eb578}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kdjr9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kdjr9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kdjr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:17:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:17:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:17:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:17:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-22 00:17:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:17.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2947" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":217,"skipped":3691,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:17.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:17:17.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211" in namespace "projected-9714" to be "success or failure"
Aug 22 00:17:17.182: INFO: Pod "downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824784ms
Aug 22 00:17:19.186: INFO: Pod "downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006683853s
Aug 22 00:17:21.190: INFO: Pod "downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01029065s
STEP: Saw pod success
Aug 22 00:17:21.190: INFO: Pod "downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211" satisfied condition "success or failure"
Aug 22 00:17:21.192: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211 container client-container: 
STEP: delete the pod
Aug 22 00:17:21.251: INFO: Waiting for pod downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211 to disappear
Aug 22 00:17:21.260: INFO: Pod downwardapi-volume-40929c6c-f31e-43c5-b9cc-a302a3ab9211 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:21.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9714" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3692,"failed":0}

------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:21.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:21.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6898" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":219,"skipped":3692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:21.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-644
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-644 to expose endpoints map[]
Aug 22 00:17:21.506: INFO: Get endpoints failed (15.653783ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 22 00:17:22.593: INFO: successfully validated that service multi-endpoint-test in namespace services-644 exposes endpoints map[] (1.103547377s elapsed)
STEP: Creating pod pod1 in namespace services-644
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-644 to expose endpoints map[pod1:[100]]
Aug 22 00:17:27.124: INFO: successfully validated that service multi-endpoint-test in namespace services-644 exposes endpoints map[pod1:[100]] (4.522256559s elapsed)
STEP: Creating pod pod2 in namespace services-644
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-644 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 22 00:17:31.232: INFO: successfully validated that service multi-endpoint-test in namespace services-644 exposes endpoints map[pod1:[100] pod2:[101]] (4.103360433s elapsed)
STEP: Deleting pod pod1 in namespace services-644
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-644 to expose endpoints map[pod2:[101]]
Aug 22 00:17:32.270: INFO: successfully validated that service multi-endpoint-test in namespace services-644 exposes endpoints map[pod2:[101]] (1.033771366s elapsed)
STEP: Deleting pod pod2 in namespace services-644
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-644 to expose endpoints map[]
Aug 22 00:17:33.335: INFO: successfully validated that service multi-endpoint-test in namespace services-644 exposes endpoints map[] (1.043285656s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:17:33.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-644" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.020 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":220,"skipped":3720,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:17:33.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:17:33.509: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 22 00:17:33.525: INFO: Number of nodes with available pods: 0
Aug 22 00:17:33.525: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 22 00:17:33.561: INFO: Number of nodes with available pods: 0
Aug 22 00:17:33.561: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:34.565: INFO: Number of nodes with available pods: 0
Aug 22 00:17:34.566: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:35.566: INFO: Number of nodes with available pods: 0
Aug 22 00:17:35.566: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:36.566: INFO: Number of nodes with available pods: 0
Aug 22 00:17:36.566: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:37.565: INFO: Number of nodes with available pods: 1
Aug 22 00:17:37.565: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 22 00:17:37.596: INFO: Number of nodes with available pods: 1
Aug 22 00:17:37.596: INFO: Number of running nodes: 0, number of available pods: 1
Aug 22 00:17:38.599: INFO: Number of nodes with available pods: 0
Aug 22 00:17:38.600: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 22 00:17:38.637: INFO: Number of nodes with available pods: 0
Aug 22 00:17:38.637: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:39.641: INFO: Number of nodes with available pods: 0
Aug 22 00:17:39.641: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:40.641: INFO: Number of nodes with available pods: 0
Aug 22 00:17:40.641: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:41.644: INFO: Number of nodes with available pods: 0
Aug 22 00:17:41.644: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:42.641: INFO: Number of nodes with available pods: 0
Aug 22 00:17:42.641: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:43.641: INFO: Number of nodes with available pods: 0
Aug 22 00:17:43.641: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:44.714: INFO: Number of nodes with available pods: 0
Aug 22 00:17:44.714: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:45.640: INFO: Number of nodes with available pods: 0
Aug 22 00:17:45.640: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 22 00:17:46.640: INFO: Number of nodes with available pods: 1
Aug 22 00:17:46.640: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4026, will wait for the garbage collector to delete the pods
Aug 22 00:17:46.705: INFO: Deleting DaemonSet.extensions daemon-set took: 6.593154ms
Aug 22 00:17:47.005: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.263779ms
Aug 22 00:18:01.808: INFO: Number of nodes with available pods: 0
Aug 22 00:18:01.809: INFO: Number of running nodes: 0, number of available pods: 0
Aug 22 00:18:01.811: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4026/daemonsets","resourceVersion":"2296755"},"items":null}

Aug 22 00:18:01.814: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4026/pods","resourceVersion":"2296755"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:18:01.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4026" for this suite.

• [SLOW TEST:28.521 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":221,"skipped":3728,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:18:01.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-12ae1cb0-9c9d-4067-a171-b25e37c0b82f
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-12ae1cb0-9c9d-4067-a171-b25e37c0b82f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:18:08.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8651" for this suite.

• [SLOW TEST:6.127 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3748,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:18:08.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-a52f1ad7-b9eb-4131-8e48-436faa7e1d56
STEP: Creating a pod to test consume secrets
Aug 22 00:18:08.106: INFO: Waiting up to 5m0s for pod "pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617" in namespace "secrets-7364" to be "success or failure"
Aug 22 00:18:08.110: INFO: Pod "pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132776ms
Aug 22 00:18:10.114: INFO: Pod "pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008568309s
Aug 22 00:18:12.118: INFO: Pod "pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012425207s
STEP: Saw pod success
Aug 22 00:18:12.118: INFO: Pod "pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617" satisfied condition "success or failure"
Aug 22 00:18:12.121: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617 container secret-volume-test: 
STEP: delete the pod
Aug 22 00:18:12.137: INFO: Waiting for pod pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617 to disappear
Aug 22 00:18:12.153: INFO: Pod pod-secrets-ac20d6a1-4901-41b1-975c-38f262af9617 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:18:12.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7364" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3783,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:18:12.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:18:12.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:18:15.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652292, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652292, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652292, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652292, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:18:18.033: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:18:18.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4121-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:18:19.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4411" for this suite.
STEP: Destroying namespace "webhook-4411-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.004 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":224,"skipped":3804,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:18:19.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 22 00:18:23.660: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:18:23.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1086" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3820,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:18:23.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:18:24.534: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:18:26.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:18:28.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652304, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:18:31.570: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:18:43.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2386" for this suite.
STEP: Destroying namespace "webhook-2386-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.195 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":226,"skipped":3845,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:18:43.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-088d5ebb-c4be-43a2-8c1e-3f191a748112
STEP: Creating a pod to test consume configMaps
Aug 22 00:18:44.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3" in namespace "projected-6051" to be "success or failure"
Aug 22 00:18:44.553: INFO: Pod "pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.400327ms
Aug 22 00:18:46.563: INFO: Pod "pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040061504s
Aug 22 00:18:48.567: INFO: Pod "pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043877585s
STEP: Saw pod success
Aug 22 00:18:48.567: INFO: Pod "pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3" satisfied condition "success or failure"
Aug 22 00:18:48.570: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 22 00:18:48.642: INFO: Waiting for pod pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3 to disappear
Aug 22 00:18:48.644: INFO: Pod pod-projected-configmaps-2fa64cca-ffa1-4d23-84b9-a8c024a4a2a3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:18:48.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6051" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3858,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:18:48.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 22 00:18:48.724: INFO: >>> kubeConfig: /root/.kube/config
Aug 22 00:18:50.689: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:01.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5234" for this suite.

• [SLOW TEST:12.466 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":228,"skipped":3864,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:01.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 22 00:19:01.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:16.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7385" for this suite.

• [SLOW TEST:15.270 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":229,"skipped":3879,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:16.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:19:16.493: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: alternatives.log containers/
[the same two-entry listing (alternatives.log, containers/) repeats for the remaining proxied requests (1) through (19); the tail of this test, its PASSED record, and the opening lines of the following [k8s.io] Kubelet test are missing from the source log]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:24.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-137" for this suite.

• [SLOW TEST:8.103 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3940,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:24.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d33d17ea-64a6-4180-8799-70b0abcdcffe
STEP: Creating a pod to test consume secrets
Aug 22 00:19:26.217: INFO: Waiting up to 5m0s for pod "pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4" in namespace "secrets-9947" to be "success or failure"
Aug 22 00:19:26.244: INFO: Pod "pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.533837ms
Aug 22 00:19:28.289: INFO: Pod "pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07176652s
Aug 22 00:19:30.343: INFO: Pod "pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4": Phase="Running", Reason="", readiness=true. Elapsed: 4.125565383s
Aug 22 00:19:32.346: INFO: Pod "pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129250374s
STEP: Saw pod success
Aug 22 00:19:32.347: INFO: Pod "pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4" satisfied condition "success or failure"
Aug 22 00:19:32.349: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4 container secret-volume-test: 
STEP: delete the pod
Aug 22 00:19:32.369: INFO: Waiting for pod pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4 to disappear
Aug 22 00:19:32.371: INFO: Pod pod-secrets-d0e8e91a-cb94-41e3-b446-96f1cb97b0a4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:32.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9947" for this suite.

• [SLOW TEST:7.782 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3949,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:32.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 00:19:32.515: INFO: Waiting up to 5m0s for pod "downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3" in namespace "downward-api-5556" to be "success or failure"
Aug 22 00:19:32.528: INFO: Pod "downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.391622ms
Aug 22 00:19:34.858: INFO: Pod "downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342769642s
Aug 22 00:19:36.862: INFO: Pod "downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.346582893s
STEP: Saw pod success
Aug 22 00:19:36.862: INFO: Pod "downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3" satisfied condition "success or failure"
Aug 22 00:19:36.864: INFO: Trying to get logs from node jerma-worker pod downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3 container dapi-container: 
STEP: delete the pod
Aug 22 00:19:36.913: INFO: Waiting for pod downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3 to disappear
Aug 22 00:19:36.923: INFO: Pod downward-api-481d4feb-5618-4027-9f8b-fa39155c31d3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:36.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5556" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3959,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:36.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Aug 22 00:19:36.991: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix798082616/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:37.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4384" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":234,"skipped":3976,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:37.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:53.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2392" for this suite.

• [SLOW TEST:16.700 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":235,"skipped":3986,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:53.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9868.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9868.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 00:19:59.909: INFO: DNS probes using dns-9868/dns-test-eaddc522-23f7-46ec-bcfb-9f73e6b0163f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:19:59.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9868" for this suite.

• [SLOW TEST:6.206 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":236,"skipped":4020,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:19:59.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:20:00.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af" in namespace "downward-api-7169" to be "success or failure"
Aug 22 00:20:00.073: INFO: Pod "downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.683091ms
Aug 22 00:20:02.077: INFO: Pod "downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007224098s
Aug 22 00:20:04.080: INFO: Pod "downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010717912s
STEP: Saw pod success
Aug 22 00:20:04.080: INFO: Pod "downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af" satisfied condition "success or failure"
Aug 22 00:20:04.083: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af container client-container: 
STEP: delete the pod
Aug 22 00:20:04.297: INFO: Waiting for pod downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af to disappear
Aug 22 00:20:04.313: INFO: Pod downwardapi-volume-caf7a101-f1e6-4aac-b339-a488da8f00af no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:20:04.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7169" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":4027,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:20:04.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 00:20:04.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3324'
Aug 22 00:20:08.762: INFO: stderr: ""
Aug 22 00:20:08.762: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 22 00:20:13.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3324 -o json'
Aug 22 00:20:13.921: INFO: stderr: ""
Aug 22 00:20:13.921: INFO: stdout:
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "creationTimestamp": "2020-08-22T00:20:08Z",
        "labels": {
            "run": "e2e-test-httpd-pod"
        },
        "name": "e2e-test-httpd-pod",
        "namespace": "kubectl-3324",
        "resourceVersion": "2297659",
        "selfLink": "/api/v1/namespaces/kubectl-3324/pods/e2e-test-httpd-pod",
        "uid": "f98cecc8-8e43-4546-8a30-955307e06b1f"
    },
    "spec": {
        "containers": [
            {
                "image": "docker.io/library/httpd:2.4.38-alpine",
                "imagePullPolicy": "IfNotPresent",
                "name": "e2e-test-httpd-pod",
                "resources": {},
                "terminationMessagePath": "/dev/termination-log",
                "terminationMessagePolicy": "File",
                "volumeMounts": [
                    {
                        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                        "name": "default-token-66klt",
                        "readOnly": true
                    }
                ]
            }
        ],
        "dnsPolicy": "ClusterFirst",
        "enableServiceLinks": true,
        "nodeName": "jerma-worker2",
        "priority": 0,
        "restartPolicy": "Always",
        "schedulerName": "default-scheduler",
        "securityContext": {},
        "serviceAccount": "default",
        "serviceAccountName": "default",
        "terminationGracePeriodSeconds": 30,
        "tolerations": [
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/not-ready",
                "operator": "Exists",
                "tolerationSeconds": 300
            },
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/unreachable",
                "operator": "Exists",
                "tolerationSeconds": 300
            }
        ],
        "volumes": [
            {
                "name": "default-token-66klt",
                "secret": {
                    "defaultMode": 420,
                    "secretName": "default-token-66klt"
                }
            }
        ]
    },
    "status": {
        "conditions": [
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-08-22T00:20:08Z",
                "status": "True",
                "type": "Initialized"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-08-22T00:20:12Z",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-08-22T00:20:12Z",
                "status": "True",
                "type": "ContainersReady"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-08-22T00:20:08Z",
                "status": "True",
                "type": "PodScheduled"
            }
        ],
        "containerStatuses": [
            {
                "containerID": "containerd://1d295be68fcbae6b87debef28762bf70fea34994957ec299f86e68a00b7e43bc",
                "image": "docker.io/library/httpd:2.4.38-alpine",
                "imageID": "docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060",
                "lastState": {},
                "name": "e2e-test-httpd-pod",
                "ready": true,
                "restartCount": 0,
                "started": true,
                "state": {
                    "running": {
                        "startedAt": "2020-08-22T00:20:11Z"
                    }
                }
            }
        ],
        "hostIP": "172.18.0.3",
        "phase": "Running",
        "podIP": "10.244.1.27",
        "podIPs": [
            {
                "ip": "10.244.1.27"
            }
        ],
        "qosClass": "BestEffort",
        "startTime": "2020-08-22T00:20:08Z"
    }
}
STEP: replace the image in the pod
Aug 22 00:20:13.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3324'
Aug 22 00:20:14.225: INFO: stderr: ""
Aug 22 00:20:14.225: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 22 00:20:14.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3324'
Aug 22 00:20:21.750: INFO: stderr: ""
Aug 22 00:20:21.750: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:20:21.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3324" for this suite.

• [SLOW TEST:17.435 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":238,"skipped":4036,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:20:21.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-d450650d-02da-4a5f-a765-5478d490e31f
STEP: Creating a pod to test consume secrets
Aug 22 00:20:21.876: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a" in namespace "projected-9168" to be "success or failure"
Aug 22 00:20:21.895: INFO: Pod "pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.625318ms
Aug 22 00:20:23.924: INFO: Pod "pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048559698s
Aug 22 00:20:25.928: INFO: Pod "pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052326037s
Aug 22 00:20:27.932: INFO: Pod "pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056906902s
STEP: Saw pod success
Aug 22 00:20:27.933: INFO: Pod "pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a" satisfied condition "success or failure"
Aug 22 00:20:27.936: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a container secret-volume-test: 
STEP: delete the pod
Aug 22 00:20:27.967: INFO: Waiting for pod pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a to disappear
Aug 22 00:20:27.978: INFO: Pod pod-projected-secrets-e52b11be-8dc4-436d-ace3-df39b5aae31a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:20:27.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9168" for this suite.

• [SLOW TEST:6.236 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4045,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:20:27.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:20:28.427: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:20:30.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:20:32.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652428, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:20:35.465: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:20:35.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:20:36.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7335" for this suite.
STEP: Destroying namespace "webhook-7335-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.126 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":240,"skipped":4047,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:20:37.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:20:39.141: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:20:41.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652438, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:20:43.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652438, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:20:45.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652439, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652438, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:20:48.182: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:20:49.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1303" for this suite.
STEP: Destroying namespace "webhook-1303-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.763 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":241,"skipped":4048,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:20:49.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 22 00:20:50.388: INFO: Waiting up to 5m0s for pod "client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1" in namespace "containers-8780" to be "success or failure"
Aug 22 00:20:50.626: INFO: Pod "client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 237.536738ms
Aug 22 00:20:52.630: INFO: Pod "client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241787944s
Aug 22 00:20:54.634: INFO: Pod "client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245933458s
STEP: Saw pod success
Aug 22 00:20:54.634: INFO: Pod "client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1" satisfied condition "success or failure"
Aug 22 00:20:54.637: INFO: Trying to get logs from node jerma-worker pod client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1 container test-container: 
STEP: delete the pod
Aug 22 00:20:54.680: INFO: Waiting for pod client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1 to disappear
Aug 22 00:20:54.717: INFO: Pod client-containers-e78b34cc-983a-488d-b2f3-4cf767e9aaf1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:20:54.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8780" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4067,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:20:54.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:21:26.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3818" for this suite.
STEP: Destroying namespace "nsdeletetest-209" for this suite.
Aug 22 00:21:26.139: INFO: Namespace nsdeletetest-209 was already deleted
STEP: Destroying namespace "nsdeletetest-220" for this suite.

• [SLOW TEST:31.413 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":243,"skipped":4078,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:21:26.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 22 00:21:26.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5505'
Aug 22 00:21:26.329: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 22 00:21:26.329: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Aug 22 00:21:28.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5505'
Aug 22 00:21:28.585: INFO: stderr: ""
Aug 22 00:21:28.585: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:21:28.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5505" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":244,"skipped":4086,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:21:28.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:21:39.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4035" for this suite.

• [SLOW TEST:11.293 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":245,"skipped":4099,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:21:39.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:21:40.614: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:21:42.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652500, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652500, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652501, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652500, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:21:44.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652500, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652500, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652501, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652500, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:21:47.788: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:21:47.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4405" for this suite.
STEP: Destroying namespace "webhook-4405-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.133 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":246,"skipped":4100,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:21:48.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-8870
STEP: creating replication controller nodeport-test in namespace services-8870
I0822 00:21:48.150767       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8870, replica count: 2
I0822 00:21:51.201201       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0822 00:21:54.201476       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 22 00:21:54.201: INFO: Creating new exec pod
Aug 22 00:21:59.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8870 execpodlz9pk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 22 00:21:59.462: INFO: stderr: "I0822 00:21:59.361370    4487 log.go:172] (0xc0009700b0) (0xc0009540a0) Create stream\nI0822 00:21:59.361436    4487 log.go:172] (0xc0009700b0) (0xc0009540a0) Stream added, broadcasting: 1\nI0822 00:21:59.363323    4487 log.go:172] (0xc0009700b0) Reply frame received for 1\nI0822 00:21:59.363366    4487 log.go:172] (0xc0009700b0) (0xc000a46000) Create stream\nI0822 00:21:59.363455    4487 log.go:172] (0xc0009700b0) (0xc000a46000) Stream added, broadcasting: 3\nI0822 00:21:59.364370    4487 log.go:172] (0xc0009700b0) Reply frame received for 3\nI0822 00:21:59.364409    4487 log.go:172] (0xc0009700b0) (0xc000a460a0) Create stream\nI0822 00:21:59.364421    4487 log.go:172] (0xc0009700b0) (0xc000a460a0) Stream added, broadcasting: 5\nI0822 00:21:59.365323    4487 log.go:172] (0xc0009700b0) Reply frame received for 5\nI0822 00:21:59.451553    4487 log.go:172] (0xc0009700b0) Data frame received for 3\nI0822 00:21:59.451593    4487 log.go:172] (0xc000a46000) (3) Data frame handling\nI0822 00:21:59.451614    4487 log.go:172] (0xc0009700b0) Data frame received for 5\nI0822 00:21:59.451625    4487 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0822 00:21:59.451640    4487 log.go:172] (0xc000a460a0) (5) Data frame sent\nI0822 00:21:59.451647    4487 log.go:172] (0xc0009700b0) Data frame received for 5\nI0822 00:21:59.451653    4487 log.go:172] (0xc000a460a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0822 00:21:59.453455    4487 log.go:172] (0xc0009700b0) Data frame received for 1\nI0822 00:21:59.453481    4487 log.go:172] (0xc0009540a0) (1) Data frame handling\nI0822 00:21:59.453504    4487 log.go:172] (0xc0009540a0) (1) Data frame sent\nI0822 00:21:59.453553    4487 log.go:172] (0xc0009700b0) (0xc0009540a0) Stream removed, broadcasting: 1\nI0822 00:21:59.453626    4487 log.go:172] (0xc0009700b0) Go away received\nI0822 00:21:59.453922    4487 log.go:172] (0xc0009700b0) (0xc0009540a0) Stream removed, broadcasting: 1\nI0822 00:21:59.453945    4487 log.go:172] (0xc0009700b0) (0xc000a46000) Stream removed, broadcasting: 3\nI0822 00:21:59.453956    4487 log.go:172] (0xc0009700b0) (0xc000a460a0) Stream removed, broadcasting: 5\n"
Aug 22 00:21:59.462: INFO: stdout: ""
Aug 22 00:21:59.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8870 execpodlz9pk -- /bin/sh -x -c nc -zv -t -w 2 10.101.42.29 80'
Aug 22 00:21:59.667: INFO: stderr: "I0822 00:21:59.590021    4511 log.go:172] (0xc000a6eb00) (0xc000778aa0) Create stream\nI0822 00:21:59.590079    4511 log.go:172] (0xc000a6eb00) (0xc000778aa0) Stream added, broadcasting: 1\nI0822 00:21:59.591763    4511 log.go:172] (0xc000a6eb00) Reply frame received for 1\nI0822 00:21:59.591795    4511 log.go:172] (0xc000a6eb00) (0xc0009320a0) Create stream\nI0822 00:21:59.591805    4511 log.go:172] (0xc000a6eb00) (0xc0009320a0) Stream added, broadcasting: 3\nI0822 00:21:59.592835    4511 log.go:172] (0xc000a6eb00) Reply frame received for 3\nI0822 00:21:59.592872    4511 log.go:172] (0xc000a6eb00) (0xc000778b40) Create stream\nI0822 00:21:59.592888    4511 log.go:172] (0xc000a6eb00) (0xc000778b40) Stream added, broadcasting: 5\nI0822 00:21:59.593734    4511 log.go:172] (0xc000a6eb00) Reply frame received for 5\nI0822 00:21:59.657187    4511 log.go:172] (0xc000a6eb00) Data frame received for 5\nI0822 00:21:59.657212    4511 log.go:172] (0xc000778b40) (5) Data frame handling\nI0822 00:21:59.657227    4511 log.go:172] (0xc000778b40) (5) Data frame sent\nI0822 00:21:59.657238    4511 log.go:172] (0xc000a6eb00) Data frame received for 5\nI0822 00:21:59.657245    4511 log.go:172] (0xc000778b40) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.42.29 80\nConnection to 10.101.42.29 80 port [tcp/http] succeeded!\nI0822 00:21:59.657432    4511 log.go:172] (0xc000a6eb00) Data frame received for 3\nI0822 00:21:59.657459    4511 log.go:172] (0xc0009320a0) (3) Data frame handling\nI0822 00:21:59.661172    4511 log.go:172] (0xc000a6eb00) Data frame received for 1\nI0822 00:21:59.661193    4511 log.go:172] (0xc000778aa0) (1) Data frame handling\nI0822 00:21:59.661200    4511 log.go:172] (0xc000778aa0) (1) Data frame sent\nI0822 00:21:59.661209    4511 log.go:172] (0xc000a6eb00) (0xc000778aa0) Stream removed, broadcasting: 1\nI0822 00:21:59.661224    4511 log.go:172] (0xc000a6eb00) Go away received\nI0822 00:21:59.661487    4511 log.go:172] (0xc000a6eb00) (0xc000778aa0) Stream removed, broadcasting: 1\nI0822 00:21:59.661506    4511 log.go:172] (0xc000a6eb00) (0xc0009320a0) Stream removed, broadcasting: 3\nI0822 00:21:59.661518    4511 log.go:172] (0xc000a6eb00) (0xc000778b40) Stream removed, broadcasting: 5\n"
Aug 22 00:21:59.667: INFO: stdout: ""
Aug 22 00:21:59.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8870 execpodlz9pk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30787'
Aug 22 00:21:59.872: INFO: stderr: "I0822 00:21:59.800654    4531 log.go:172] (0xc0001149a0) (0xc0009a2000) Create stream\nI0822 00:21:59.800718    4531 log.go:172] (0xc0001149a0) (0xc0009a2000) Stream added, broadcasting: 1\nI0822 00:21:59.804036    4531 log.go:172] (0xc0001149a0) Reply frame received for 1\nI0822 00:21:59.804090    4531 log.go:172] (0xc0001149a0) (0xc0009a20a0) Create stream\nI0822 00:21:59.804113    4531 log.go:172] (0xc0001149a0) (0xc0009a20a0) Stream added, broadcasting: 3\nI0822 00:21:59.805570    4531 log.go:172] (0xc0001149a0) Reply frame received for 3\nI0822 00:21:59.805611    4531 log.go:172] (0xc0001149a0) (0xc0006c39a0) Create stream\nI0822 00:21:59.805625    4531 log.go:172] (0xc0001149a0) (0xc0006c39a0) Stream added, broadcasting: 5\nI0822 00:21:59.807012    4531 log.go:172] (0xc0001149a0) Reply frame received for 5\nI0822 00:21:59.862644    4531 log.go:172] (0xc0001149a0) Data frame received for 5\nI0822 00:21:59.862687    4531 log.go:172] (0xc0006c39a0) (5) Data frame handling\nI0822 00:21:59.862707    4531 log.go:172] (0xc0006c39a0) (5) Data frame sent\nI0822 00:21:59.862717    4531 log.go:172] (0xc0001149a0) Data frame received for 5\nI0822 00:21:59.862726    4531 log.go:172] (0xc0006c39a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 30787\nConnection to 172.18.0.6 30787 port [tcp/30787] succeeded!\nI0822 00:21:59.862747    4531 log.go:172] (0xc0001149a0) Data frame received for 3\nI0822 00:21:59.862768    4531 log.go:172] (0xc0009a20a0) (3) Data frame handling\nI0822 00:21:59.864120    4531 log.go:172] (0xc0001149a0) Data frame received for 1\nI0822 00:21:59.864134    4531 log.go:172] (0xc0009a2000) (1) Data frame handling\nI0822 00:21:59.864153    4531 log.go:172] (0xc0009a2000) (1) Data frame sent\nI0822 00:21:59.864384    4531 log.go:172] (0xc0001149a0) (0xc0009a2000) Stream removed, broadcasting: 1\nI0822 00:21:59.864411    4531 log.go:172] (0xc0001149a0) Go away received\nI0822 00:21:59.864916    4531 log.go:172] (0xc0001149a0) (0xc0009a2000) Stream removed, broadcasting: 1\nI0822 00:21:59.864944    4531 log.go:172] (0xc0001149a0) (0xc0009a20a0) Stream removed, broadcasting: 3\nI0822 00:21:59.864954    4531 log.go:172] (0xc0001149a0) (0xc0006c39a0) Stream removed, broadcasting: 5\n"
Aug 22 00:21:59.872: INFO: stdout: ""
Aug 22 00:21:59.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8870 execpodlz9pk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 30787'
Aug 22 00:22:00.089: INFO: stderr: "I0822 00:22:00.006947    4552 log.go:172] (0xc000b0a9a0) (0xc0006cfae0) Create stream\nI0822 00:22:00.006996    4552 log.go:172] (0xc000b0a9a0) (0xc0006cfae0) Stream added, broadcasting: 1\nI0822 00:22:00.009369    4552 log.go:172] (0xc000b0a9a0) Reply frame received for 1\nI0822 00:22:00.009415    4552 log.go:172] (0xc000b0a9a0) (0xc0008d4000) Create stream\nI0822 00:22:00.009432    4552 log.go:172] (0xc000b0a9a0) (0xc0008d4000) Stream added, broadcasting: 3\nI0822 00:22:00.010417    4552 log.go:172] (0xc000b0a9a0) Reply frame received for 3\nI0822 00:22:00.010444    4552 log.go:172] (0xc000b0a9a0) (0xc0006cfcc0) Create stream\nI0822 00:22:00.010452    4552 log.go:172] (0xc000b0a9a0) (0xc0006cfcc0) Stream added, broadcasting: 5\nI0822 00:22:00.011264    4552 log.go:172] (0xc000b0a9a0) Reply frame received for 5\nI0822 00:22:00.077363    4552 log.go:172] (0xc000b0a9a0) Data frame received for 3\nI0822 00:22:00.077392    4552 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0822 00:22:00.077443    4552 log.go:172] (0xc000b0a9a0) Data frame received for 5\nI0822 00:22:00.077466    4552 log.go:172] (0xc0006cfcc0) (5) Data frame handling\nI0822 00:22:00.077481    4552 log.go:172] (0xc0006cfcc0) (5) Data frame sent\nI0822 00:22:00.077504    4552 log.go:172] (0xc000b0a9a0) Data frame received for 5\nI0822 00:22:00.077514    4552 log.go:172] (0xc0006cfcc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 30787\nConnection to 172.18.0.3 30787 port [tcp/30787] succeeded!\nI0822 00:22:00.080058    4552 log.go:172] (0xc000b0a9a0) Data frame received for 1\nI0822 00:22:00.080101    4552 log.go:172] (0xc0006cfae0) (1) Data frame handling\nI0822 00:22:00.080119    4552 log.go:172] (0xc0006cfae0) (1) Data frame sent\nI0822 00:22:00.080136    4552 log.go:172] (0xc000b0a9a0) (0xc0006cfae0) Stream removed, broadcasting: 1\nI0822 00:22:00.080315    4552 log.go:172] (0xc000b0a9a0) Go away received\nI0822 00:22:00.080553    4552 log.go:172] (0xc000b0a9a0) (0xc0006cfae0) Stream removed, broadcasting: 1\nI0822 00:22:00.080580    4552 log.go:172] (0xc000b0a9a0) (0xc0008d4000) Stream removed, broadcasting: 3\nI0822 00:22:00.080593    4552 log.go:172] (0xc000b0a9a0) (0xc0006cfcc0) Stream removed, broadcasting: 5\n"
Aug 22 00:22:00.089: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:22:00.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8870" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.076 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":247,"skipped":4110,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:22:00.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:22:00.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-982" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":248,"skipped":4125,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:22:00.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:22:00.612: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:22:02.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652520, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652520, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652520, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652520, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:22:05.746: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:22:06.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1196" for this suite.
STEP: Destroying namespace "webhook-1196-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.783 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":249,"skipped":4193,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:22:06.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 22 00:22:08.037: INFO: Waiting up to 5m0s for pod "pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c" in namespace "emptydir-2884" to be "success or failure"
Aug 22 00:22:08.431: INFO: Pod "pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c": Phase="Pending", Reason="", readiness=false. Elapsed: 393.665287ms
Aug 22 00:22:10.435: INFO: Pod "pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397613088s
Aug 22 00:22:12.438: INFO: Pod "pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.401032752s
STEP: Saw pod success
Aug 22 00:22:12.438: INFO: Pod "pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c" satisfied condition "success or failure"
Aug 22 00:22:12.440: INFO: Trying to get logs from node jerma-worker2 pod pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c container test-container: 
STEP: delete the pod
Aug 22 00:22:12.483: INFO: Waiting for pod pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c to disappear
Aug 22 00:22:12.499: INFO: Pod pod-9ba9077c-bd7f-4f80-8f4c-cf6c4456ac4c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:22:12.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2884" for this suite.

• [SLOW TEST:5.534 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4198,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:22:12.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0822 00:22:42.970347       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 22 00:22:42.970: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:22:42.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1289" for this suite.

• [SLOW TEST:30.470 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":251,"skipped":4226,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:22:42.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-baa1e815-799a-407b-9f5a-c44b107cdb9a
STEP: Creating a pod to test consume secrets
Aug 22 00:22:43.055: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c" in namespace "projected-1203" to be "success or failure"
Aug 22 00:22:43.058: INFO: Pod "pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606426ms
Aug 22 00:22:45.261: INFO: Pod "pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20639522s
Aug 22 00:22:47.265: INFO: Pod "pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209947334s
STEP: Saw pod success
Aug 22 00:22:47.265: INFO: Pod "pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c" satisfied condition "success or failure"
Aug 22 00:22:47.267: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c container projected-secret-volume-test: 
STEP: delete the pod
Aug 22 00:22:47.309: INFO: Waiting for pod pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c to disappear
Aug 22 00:22:47.325: INFO: Pod pod-projected-secrets-1a62579d-e54e-4386-9087-4c08d89a6b3c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:22:47.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1203" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4256,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:22:47.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-533
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 22 00:22:47.471: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 22 00:23:07.671: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.46 8081 | grep -v '^\s*$'] Namespace:pod-network-test-533 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 00:23:07.671: INFO: >>> kubeConfig: /root/.kube/config
I0822 00:23:07.706018       6 log.go:172] (0xc002ec8790) (0xc000536dc0) Create stream
I0822 00:23:07.706054       6 log.go:172] (0xc002ec8790) (0xc000536dc0) Stream added, broadcasting: 1
I0822 00:23:07.707908       6 log.go:172] (0xc002ec8790) Reply frame received for 1
I0822 00:23:07.707945       6 log.go:172] (0xc002ec8790) (0xc000920280) Create stream
I0822 00:23:07.707956       6 log.go:172] (0xc002ec8790) (0xc000920280) Stream added, broadcasting: 3
I0822 00:23:07.708916       6 log.go:172] (0xc002ec8790) Reply frame received for 3
I0822 00:23:07.708937       6 log.go:172] (0xc002ec8790) (0xc000537860) Create stream
I0822 00:23:07.708943       6 log.go:172] (0xc002ec8790) (0xc000537860) Stream added, broadcasting: 5
I0822 00:23:07.709711       6 log.go:172] (0xc002ec8790) Reply frame received for 5
I0822 00:23:08.800708       6 log.go:172] (0xc002ec8790) Data frame received for 5
I0822 00:23:08.800863       6 log.go:172] (0xc000537860) (5) Data frame handling
I0822 00:23:08.800903       6 log.go:172] (0xc002ec8790) Data frame received for 3
I0822 00:23:08.800926       6 log.go:172] (0xc000920280) (3) Data frame handling
I0822 00:23:08.800956       6 log.go:172] (0xc000920280) (3) Data frame sent
I0822 00:23:08.800979       6 log.go:172] (0xc002ec8790) Data frame received for 3
I0822 00:23:08.801001       6 log.go:172] (0xc000920280) (3) Data frame handling
I0822 00:23:08.803394       6 log.go:172] (0xc002ec8790) Data frame received for 1
I0822 00:23:08.803437       6 log.go:172] (0xc000536dc0) (1) Data frame handling
I0822 00:23:08.803478       6 log.go:172] (0xc000536dc0) (1) Data frame sent
I0822 00:23:08.803506       6 log.go:172] (0xc002ec8790) (0xc000536dc0) Stream removed, broadcasting: 1
I0822 00:23:08.803605       6 log.go:172] (0xc002ec8790) (0xc000536dc0) Stream removed, broadcasting: 1
I0822 00:23:08.803621       6 log.go:172] (0xc002ec8790) (0xc000920280) Stream removed, broadcasting: 3
I0822 00:23:08.803699       6 log.go:172] (0xc002ec8790) Go away received
I0822 00:23:08.803854       6 log.go:172] (0xc002ec8790) (0xc000537860) Stream removed, broadcasting: 5
Aug 22 00:23:08.803: INFO: Found all expected endpoints: [netserver-0]
Aug 22 00:23:08.807: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.37 8081 | grep -v '^\s*$'] Namespace:pod-network-test-533 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 00:23:08.807: INFO: >>> kubeConfig: /root/.kube/config
I0822 00:23:08.840629       6 log.go:172] (0xc002ec8d10) (0xc001140b40) Create stream
I0822 00:23:08.840656       6 log.go:172] (0xc002ec8d10) (0xc001140b40) Stream added, broadcasting: 1
I0822 00:23:08.842897       6 log.go:172] (0xc002ec8d10) Reply frame received for 1
I0822 00:23:08.842920       6 log.go:172] (0xc002ec8d10) (0xc002db7e00) Create stream
I0822 00:23:08.842931       6 log.go:172] (0xc002ec8d10) (0xc002db7e00) Stream added, broadcasting: 3
I0822 00:23:08.843863       6 log.go:172] (0xc002ec8d10) Reply frame received for 3
I0822 00:23:08.843892       6 log.go:172] (0xc002ec8d10) (0xc002db7ea0) Create stream
I0822 00:23:08.843901       6 log.go:172] (0xc002ec8d10) (0xc002db7ea0) Stream added, broadcasting: 5
I0822 00:23:08.844695       6 log.go:172] (0xc002ec8d10) Reply frame received for 5
I0822 00:23:09.915280       6 log.go:172] (0xc002ec8d10) Data frame received for 5
I0822 00:23:09.915317       6 log.go:172] (0xc002db7ea0) (5) Data frame handling
I0822 00:23:09.915349       6 log.go:172] (0xc002ec8d10) Data frame received for 3
I0822 00:23:09.915380       6 log.go:172] (0xc002db7e00) (3) Data frame handling
I0822 00:23:09.915417       6 log.go:172] (0xc002db7e00) (3) Data frame sent
I0822 00:23:09.915454       6 log.go:172] (0xc002ec8d10) Data frame received for 3
I0822 00:23:09.915478       6 log.go:172] (0xc002db7e00) (3) Data frame handling
I0822 00:23:09.917577       6 log.go:172] (0xc002ec8d10) Data frame received for 1
I0822 00:23:09.917604       6 log.go:172] (0xc001140b40) (1) Data frame handling
I0822 00:23:09.917629       6 log.go:172] (0xc001140b40) (1) Data frame sent
I0822 00:23:09.917644       6 log.go:172] (0xc002ec8d10) (0xc001140b40) Stream removed, broadcasting: 1
I0822 00:23:09.917720       6 log.go:172] (0xc002ec8d10) Go away received
I0822 00:23:09.917842       6 log.go:172] (0xc002ec8d10) (0xc001140b40) Stream removed, broadcasting: 1
I0822 00:23:09.917894       6 log.go:172] (0xc002ec8d10) (0xc002db7e00) Stream removed, broadcasting: 3
I0822 00:23:09.917919       6 log.go:172] (0xc002ec8d10) (0xc002db7ea0) Stream removed, broadcasting: 5
Aug 22 00:23:09.917: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:23:09.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-533" for this suite.

• [SLOW TEST:22.551 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4263,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:23:09.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 22 00:23:10.213: INFO: Waiting up to 5m0s for pod "pod-9de84cfd-f243-45d1-b306-24673caf4cbb" in namespace "emptydir-2659" to be "success or failure"
Aug 22 00:23:10.215: INFO: Pod "pod-9de84cfd-f243-45d1-b306-24673caf4cbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382724ms
Aug 22 00:23:12.220: INFO: Pod "pod-9de84cfd-f243-45d1-b306-24673caf4cbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006780318s
Aug 22 00:23:14.224: INFO: Pod "pod-9de84cfd-f243-45d1-b306-24673caf4cbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010498698s
STEP: Saw pod success
Aug 22 00:23:14.224: INFO: Pod "pod-9de84cfd-f243-45d1-b306-24673caf4cbb" satisfied condition "success or failure"
Aug 22 00:23:14.226: INFO: Trying to get logs from node jerma-worker2 pod pod-9de84cfd-f243-45d1-b306-24673caf4cbb container test-container: 
STEP: delete the pod
Aug 22 00:23:14.286: INFO: Waiting for pod pod-9de84cfd-f243-45d1-b306-24673caf4cbb to disappear
Aug 22 00:23:14.453: INFO: Pod pod-9de84cfd-f243-45d1-b306-24673caf4cbb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:23:14.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2659" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4273,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:23:14.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:23:16.549: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:23:18.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652596, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652596, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652596, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652596, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:23:21.583: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:23:22.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5919" for this suite.
STEP: Destroying namespace "webhook-5919-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.336 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":255,"skipped":4279,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:23:22.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:23:22.935: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 22 00:23:27.996: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 22 00:23:27.996: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 22 00:23:29.999: INFO: Creating deployment "test-rollover-deployment"
Aug 22 00:23:30.012: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 22 00:23:32.018: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 22 00:23:32.025: INFO: Ensure that both replica sets have 1 created replica
Aug 22 00:23:32.031: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 22 00:23:32.037: INFO: Updating deployment test-rollover-deployment
Aug 22 00:23:32.037: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 22 00:23:34.218: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 22 00:23:34.225: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 22 00:23:34.231: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 00:23:34.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652612, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:23:36.238: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 00:23:36.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652615, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:23:38.239: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 00:23:38.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652615, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:23:40.237: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 00:23:40.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652615, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:23:42.238: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 00:23:42.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652615, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:23:44.239: INFO: all replica sets need to contain the pod-template-hash label
Aug 22 00:23:44.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652615, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652610, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:23:46.247: INFO: 
Aug 22 00:23:46.247: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 22 00:23:46.253: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-5300 /apis/apps/v1/namespaces/deployment-5300/deployments/test-rollover-deployment 6bfab89b-00db-4d72-b0a9-c021a05b6589 2299175 2 2020-08-22 00:23:30 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005956208  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-22 00:23:30 +0000 UTC,LastTransitionTime:2020-08-22 00:23:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-22 00:23:46 +0000 UTC,LastTransitionTime:2020-08-22 00:23:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 22 00:23:46.256: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-5300 /apis/apps/v1/namespaces/deployment-5300/replicasets/test-rollover-deployment-574d6dfbff 0ca80ba4-4453-4719-a5e0-434e865553ab 2299160 2 2020-08-22 00:23:32 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 6bfab89b-00db-4d72-b0a9-c021a05b6589 0xc0059566f7 0xc0059566f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005956768  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 22 00:23:46.256: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 22 00:23:46.256: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-5300 /apis/apps/v1/namespaces/deployment-5300/replicasets/test-rollover-controller c061dcce-c9a5-47fb-b9fd-a3ac8a45f081 2299173 2 2020-08-22 00:23:22 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 6bfab89b-00db-4d72-b0a9-c021a05b6589 0xc005956617 0xc005956618}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005956678  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 00:23:46.256: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-5300 /apis/apps/v1/namespaces/deployment-5300/replicasets/test-rollover-deployment-f6c94f66c 7181b1c1-f37d-4a16-b0b6-8fd67e1c3a09 2299108 2 2020-08-22 00:23:30 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 6bfab89b-00db-4d72-b0a9-c021a05b6589 0xc0059567d0 0xc0059567d1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005956848  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 22 00:23:46.259: INFO: Pod "test-rollover-deployment-574d6dfbff-knjk6" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-knjk6 test-rollover-deployment-574d6dfbff- deployment-5300 /api/v1/namespaces/deployment-5300/pods/test-rollover-deployment-574d6dfbff-knjk6 ea2d09d2-aa0b-4eb1-9bf4-46e1229904a2 2299132 0 2020-08-22 00:23:32 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 0ca80ba4-4453-4719-a5e0-434e865553ab 0xc005956d77 0xc005956d78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sc6dc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sc6dc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sc6dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:23:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:23:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:23:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-22 00:23:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.41,StartTime:2020-08-22 00:23:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-22 00:23:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c7c859a041ddcf11456201d3b45637252f9a6409cde271237e4bfddc035e017b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:23:46.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5300" for this suite.

• [SLOW TEST:23.465 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":256,"skipped":4284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:23:46.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:23:51.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3935" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":257,"skipped":4353,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:23:51.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 22 00:23:52.288: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 22 00:23:54.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652632, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652632, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652632, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652632, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 22 00:23:57.353: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 22 00:24:01.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-7188 to-be-attached-pod -i -c=container1'
Aug 22 00:24:01.532: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:24:01.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7188" for this suite.
STEP: Destroying namespace "webhook-7188-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.473 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":258,"skipped":4367,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:24:01.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 22 00:24:01.793: INFO: Waiting up to 5m0s for pod "pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60" in namespace "emptydir-2625" to be "success or failure"
Aug 22 00:24:02.088: INFO: Pod "pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60": Phase="Pending", Reason="", readiness=false. Elapsed: 295.087642ms
Aug 22 00:24:04.140: INFO: Pod "pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346819828s
Aug 22 00:24:06.143: INFO: Pod "pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350166994s
Aug 22 00:24:08.171: INFO: Pod "pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.378671261s
STEP: Saw pod success
Aug 22 00:24:08.172: INFO: Pod "pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60" satisfied condition "success or failure"
Aug 22 00:24:08.223: INFO: Trying to get logs from node jerma-worker2 pod pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60 container test-container: 
STEP: delete the pod
Aug 22 00:24:08.249: INFO: Waiting for pod pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60 to disappear
Aug 22 00:24:08.259: INFO: Pod pod-9021a1a7-5563-486b-bf3e-e0c7c8598b60 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:24:08.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2625" for this suite.

• [SLOW TEST:6.745 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4387,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:24:08.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:24:08.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08" in namespace "downward-api-4944" to be "success or failure"
Aug 22 00:24:08.867: INFO: Pod "downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08": Phase="Pending", Reason="", readiness=false. Elapsed: 140.720735ms
Aug 22 00:24:10.871: INFO: Pod "downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144554916s
Aug 22 00:24:12.875: INFO: Pod "downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149088685s
STEP: Saw pod success
Aug 22 00:24:12.875: INFO: Pod "downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08" satisfied condition "success or failure"
Aug 22 00:24:12.878: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08 container client-container: 
STEP: delete the pod
Aug 22 00:24:12.895: INFO: Waiting for pod downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08 to disappear
Aug 22 00:24:12.900: INFO: Pod downwardapi-volume-8f4a7cad-412d-43a7-bc75-0bc979456a08 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:24:12.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4944" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4391,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:24:12.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:24:13.043: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb" in namespace "downward-api-3410" to be "success or failure"
Aug 22 00:24:13.059: INFO: Pod "downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.174651ms
Aug 22 00:24:15.062: INFO: Pod "downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019513558s
Aug 22 00:24:17.135: INFO: Pod "downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb": Phase="Running", Reason="", readiness=true. Elapsed: 4.092481044s
Aug 22 00:24:19.140: INFO: Pod "downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096769587s
STEP: Saw pod success
Aug 22 00:24:19.140: INFO: Pod "downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb" satisfied condition "success or failure"
Aug 22 00:24:19.143: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb container client-container: 
STEP: delete the pod
Aug 22 00:24:19.188: INFO: Waiting for pod downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb to disappear
Aug 22 00:24:19.203: INFO: Pod downwardapi-volume-9b1ad861-b3cf-41ff-ab4a-b8830525c2bb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:24:19.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3410" for this suite.

• [SLOW TEST:6.304 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4395,"failed":0}
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:24:19.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 22 00:24:19.290: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 22 00:24:20.021: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 22 00:24:22.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:24:24.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733652660, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 22 00:24:27.224: INFO: Waited 623.892981ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:24:30.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4609" for this suite.

• [SLOW TEST:11.059 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":262,"skipped":4395,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:24:30.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:24:41.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5054" for this suite.

• [SLOW TEST:11.630 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":263,"skipped":4417,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:24:41.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:24:42.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8235" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":264,"skipped":4425,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:24:42.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8551
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 22 00:24:42.683: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 22 00:25:04.813: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostname&protocol=http&host=10.244.2.50&port=8080&tries=1'] Namespace:pod-network-test-8551 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 00:25:04.813: INFO: >>> kubeConfig: /root/.kube/config
I0822 00:25:04.850999       6 log.go:172] (0xc003be2160) (0xc0029541e0) Create stream
I0822 00:25:04.851027       6 log.go:172] (0xc003be2160) (0xc0029541e0) Stream added, broadcasting: 1
I0822 00:25:04.852929       6 log.go:172] (0xc003be2160) Reply frame received for 1
I0822 00:25:04.852981       6 log.go:172] (0xc003be2160) (0xc002954280) Create stream
I0822 00:25:04.852999       6 log.go:172] (0xc003be2160) (0xc002954280) Stream added, broadcasting: 3
I0822 00:25:04.854062       6 log.go:172] (0xc003be2160) Reply frame received for 3
I0822 00:25:04.854093       6 log.go:172] (0xc003be2160) (0xc0029543c0) Create stream
I0822 00:25:04.854103       6 log.go:172] (0xc003be2160) (0xc0029543c0) Stream added, broadcasting: 5
I0822 00:25:04.854976       6 log.go:172] (0xc003be2160) Reply frame received for 5
I0822 00:25:04.960108       6 log.go:172] (0xc003be2160) Data frame received for 3
I0822 00:25:04.960132       6 log.go:172] (0xc002954280) (3) Data frame handling
I0822 00:25:04.960146       6 log.go:172] (0xc002954280) (3) Data frame sent
I0822 00:25:04.960484       6 log.go:172] (0xc003be2160) Data frame received for 3
I0822 00:25:04.960510       6 log.go:172] (0xc002954280) (3) Data frame handling
I0822 00:25:04.960837       6 log.go:172] (0xc003be2160) Data frame received for 5
I0822 00:25:04.960880       6 log.go:172] (0xc0029543c0) (5) Data frame handling
I0822 00:25:04.962199       6 log.go:172] (0xc003be2160) Data frame received for 1
I0822 00:25:04.962220       6 log.go:172] (0xc0029541e0) (1) Data frame handling
I0822 00:25:04.962237       6 log.go:172] (0xc0029541e0) (1) Data frame sent
I0822 00:25:04.962262       6 log.go:172] (0xc003be2160) (0xc0029541e0) Stream removed, broadcasting: 1
I0822 00:25:04.962277       6 log.go:172] (0xc003be2160) Go away received
I0822 00:25:04.962524       6 log.go:172] (0xc003be2160) (0xc0029541e0) Stream removed, broadcasting: 1
I0822 00:25:04.962553       6 log.go:172] (0xc003be2160) (0xc002954280) Stream removed, broadcasting: 3
I0822 00:25:04.962579       6 log.go:172] (0xc003be2160) (0xc0029543c0) Stream removed, broadcasting: 5
Aug 22 00:25:04.962: INFO: Waiting for responses: map[]
Aug 22 00:25:04.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostname&protocol=http&host=10.244.1.47&port=8080&tries=1'] Namespace:pod-network-test-8551 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 00:25:04.966: INFO: >>> kubeConfig: /root/.kube/config
I0822 00:25:04.998164       6 log.go:172] (0xc001340dc0) (0xc002071ae0) Create stream
I0822 00:25:04.998190       6 log.go:172] (0xc001340dc0) (0xc002071ae0) Stream added, broadcasting: 1
I0822 00:25:04.999954       6 log.go:172] (0xc001340dc0) Reply frame received for 1
I0822 00:25:05.000032       6 log.go:172] (0xc001340dc0) (0xc002a6fcc0) Create stream
I0822 00:25:05.000050       6 log.go:172] (0xc001340dc0) (0xc002a6fcc0) Stream added, broadcasting: 3
I0822 00:25:05.001042       6 log.go:172] (0xc001340dc0) Reply frame received for 3
I0822 00:25:05.001073       6 log.go:172] (0xc001340dc0) (0xc002954500) Create stream
I0822 00:25:05.001084       6 log.go:172] (0xc001340dc0) (0xc002954500) Stream added, broadcasting: 5
I0822 00:25:05.001867       6 log.go:172] (0xc001340dc0) Reply frame received for 5
I0822 00:25:05.079186       6 log.go:172] (0xc001340dc0) Data frame received for 3
I0822 00:25:05.079213       6 log.go:172] (0xc002a6fcc0) (3) Data frame handling
I0822 00:25:05.079232       6 log.go:172] (0xc002a6fcc0) (3) Data frame sent
I0822 00:25:05.080125       6 log.go:172] (0xc001340dc0) Data frame received for 3
I0822 00:25:05.080142       6 log.go:172] (0xc002a6fcc0) (3) Data frame handling
I0822 00:25:05.080160       6 log.go:172] (0xc001340dc0) Data frame received for 5
I0822 00:25:05.080183       6 log.go:172] (0xc002954500) (5) Data frame handling
I0822 00:25:05.082065       6 log.go:172] (0xc001340dc0) Data frame received for 1
I0822 00:25:05.082087       6 log.go:172] (0xc002071ae0) (1) Data frame handling
I0822 00:25:05.082095       6 log.go:172] (0xc002071ae0) (1) Data frame sent
I0822 00:25:05.082104       6 log.go:172] (0xc001340dc0) (0xc002071ae0) Stream removed, broadcasting: 1
I0822 00:25:05.082127       6 log.go:172] (0xc001340dc0) Go away received
I0822 00:25:05.082273       6 log.go:172] (0xc001340dc0) (0xc002071ae0) Stream removed, broadcasting: 1
I0822 00:25:05.082296       6 log.go:172] (0xc001340dc0) (0xc002a6fcc0) Stream removed, broadcasting: 3
I0822 00:25:05.082305       6 log.go:172] (0xc001340dc0) (0xc002954500) Stream removed, broadcasting: 5
Aug 22 00:25:05.082: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:25:05.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8551" for this suite.

• [SLOW TEST:22.954 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4431,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:25:05.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2390.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2390.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 00:25:11.483: INFO: DNS probes using dns-test-e5a03183-b204-44d4-ba98-6decb49c9f51 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2390.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2390.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 00:25:20.199: INFO: File wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:20.202: INFO: File jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:20.202: INFO: Lookups using dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 failed for: [wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local]

Aug 22 00:25:25.227: INFO: File wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:25.230: INFO: File jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:25.230: INFO: Lookups using dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 failed for: [wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local]

Aug 22 00:25:31.034: INFO: File wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:31.093: INFO: File jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:31.093: INFO: Lookups using dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 failed for: [wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local]

Aug 22 00:25:35.207: INFO: File wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:35.210: INFO: File jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:35.210: INFO: Lookups using dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 failed for: [wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local]

Aug 22 00:25:40.210: INFO: File jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local from pod  dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 22 00:25:40.210: INFO: Lookups using dns-2390/dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 failed for: [jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local]

Aug 22 00:25:45.210: INFO: DNS probes using dns-test-6455096c-4cc4-4ca6-8694-9acc039826e1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2390.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2390.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2390.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2390.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 00:25:54.103: INFO: DNS probes using dns-test-b21b8944-b87e-43e4-a403-a277da186895 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:25:54.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2390" for this suite.

• [SLOW TEST:49.564 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":266,"skipped":4432,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:25:54.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 22 00:26:00.818: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1179 PodName:pod-sharedvolume-cf26bd60-c294-4236-aab7-41425dab8d0a ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 22 00:26:00.818: INFO: >>> kubeConfig: /root/.kube/config
I0822 00:26:00.854702       6 log.go:172] (0xc003b2a370) (0xc0020706e0) Create stream
I0822 00:26:00.854734       6 log.go:172] (0xc003b2a370) (0xc0020706e0) Stream added, broadcasting: 1
I0822 00:26:00.856633       6 log.go:172] (0xc003b2a370) Reply frame received for 1
I0822 00:26:00.856719       6 log.go:172] (0xc003b2a370) (0xc0023b4320) Create stream
I0822 00:26:00.856868       6 log.go:172] (0xc003b2a370) (0xc0023b4320) Stream added, broadcasting: 3
I0822 00:26:00.857983       6 log.go:172] (0xc003b2a370) Reply frame received for 3
I0822 00:26:00.858030       6 log.go:172] (0xc003b2a370) (0xc002070780) Create stream
I0822 00:26:00.858046       6 log.go:172] (0xc003b2a370) (0xc002070780) Stream added, broadcasting: 5
I0822 00:26:00.859031       6 log.go:172] (0xc003b2a370) Reply frame received for 5
I0822 00:26:00.950363       6 log.go:172] (0xc003b2a370) Data frame received for 5
I0822 00:26:00.950425       6 log.go:172] (0xc003b2a370) Data frame received for 3
I0822 00:26:00.950463       6 log.go:172] (0xc0023b4320) (3) Data frame handling
I0822 00:26:00.950482       6 log.go:172] (0xc0023b4320) (3) Data frame sent
I0822 00:26:00.950509       6 log.go:172] (0xc003b2a370) Data frame received for 3
I0822 00:26:00.950521       6 log.go:172] (0xc0023b4320) (3) Data frame handling
I0822 00:26:00.950541       6 log.go:172] (0xc002070780) (5) Data frame handling
I0822 00:26:00.952221       6 log.go:172] (0xc003b2a370) Data frame received for 1
I0822 00:26:00.952259       6 log.go:172] (0xc0020706e0) (1) Data frame handling
I0822 00:26:00.952292       6 log.go:172] (0xc0020706e0) (1) Data frame sent
I0822 00:26:00.952338       6 log.go:172] (0xc003b2a370) (0xc0020706e0) Stream removed, broadcasting: 1
I0822 00:26:00.952383       6 log.go:172] (0xc003b2a370) Go away received
I0822 00:26:00.952523       6 log.go:172] (0xc003b2a370) (0xc0020706e0) Stream removed, broadcasting: 1
I0822 00:26:00.952542       6 log.go:172] (0xc003b2a370) (0xc0023b4320) Stream removed, broadcasting: 3
I0822 00:26:00.952550       6 log.go:172] (0xc003b2a370) (0xc002070780) Stream removed, broadcasting: 5
Aug 22 00:26:00.952: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:26:00.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1179" for this suite.

• [SLOW TEST:6.308 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":267,"skipped":4433,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:26:00.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:26:01.036: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:26:01.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6679" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":268,"skipped":4434,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:26:01.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 22 00:26:01.748: INFO: Waiting up to 5m0s for pod "downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923" in namespace "downward-api-3492" to be "success or failure"
Aug 22 00:26:01.786: INFO: Pod "downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923": Phase="Pending", Reason="", readiness=false. Elapsed: 37.827993ms
Aug 22 00:26:03.790: INFO: Pod "downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042059054s
Aug 22 00:26:05.794: INFO: Pod "downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046161574s
STEP: Saw pod success
Aug 22 00:26:05.794: INFO: Pod "downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923" satisfied condition "success or failure"
Aug 22 00:26:05.797: INFO: Trying to get logs from node jerma-worker pod downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923 container dapi-container: 
STEP: delete the pod
Aug 22 00:26:05.846: INFO: Waiting for pod downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923 to disappear
Aug 22 00:26:05.854: INFO: Pod downward-api-7a2845b6-88cc-4811-9243-4b58f2c8b923 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:26:05.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3492" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4453,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:26:05.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 22 00:26:05.926: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:26:07.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8144" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":270,"skipped":4473,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:26:07.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-464.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-464.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-464.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-464.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 22 00:26:13.326: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.330: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.333: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.336: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.345: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.348: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.351: INFO: Unable to read jessie_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.354: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:13.360: INFO: Lookups using dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local]

Aug 22 00:26:18.365: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.368: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.371: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.374: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.383: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.386: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.389: INFO: Unable to read jessie_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.391: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:18.397: INFO: Lookups using dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local]

Aug 22 00:26:23.365: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.370: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.373: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.376: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.385: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.387: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.390: INFO: Unable to read jessie_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.393: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:23.399: INFO: Lookups using dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local]

Aug 22 00:26:28.365: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.368: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.371: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.373: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.382: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.385: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.388: INFO: Unable to read jessie_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.391: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:28.396: INFO: Lookups using dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local]

Aug 22 00:26:33.365: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.368: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.371: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.374: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.384: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.387: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.390: INFO: Unable to read jessie_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.393: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:33.399: INFO: Lookups using dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local]

Aug 22 00:26:38.402: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.405: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.409: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.412: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.420: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.422: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.424: INFO: Unable to read jessie_udp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.426: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local from pod dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d: the server could not find the requested resource (get pods dns-test-a3c913d9-ec1e-409f-9160-84409034e70d)
Aug 22 00:26:38.431: INFO: Lookups using dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local wheezy_udp@dns-test-service-2.dns-464.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-464.svc.cluster.local jessie_udp@dns-test-service-2.dns-464.svc.cluster.local jessie_tcp@dns-test-service-2.dns-464.svc.cluster.local]

Aug 22 00:26:43.398: INFO: DNS probes using dns-464/dns-test-a3c913d9-ec1e-409f-9160-84409034e70d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:26:43.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-464" for this suite.

• [SLOW TEST:36.633 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":271,"skipped":4473,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:26:43.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:27:01.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2951" for this suite.

• [SLOW TEST:17.389 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":272,"skipped":4477,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:27:01.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:27:01.191: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a" in namespace "projected-4110" to be "success or failure"
Aug 22 00:27:01.196: INFO: Pod "downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259109ms
Aug 22 00:27:03.200: INFO: Pod "downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008075598s
Aug 22 00:27:05.204: INFO: Pod "downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a": Phase="Running", Reason="", readiness=true. Elapsed: 4.01249815s
Aug 22 00:27:07.208: INFO: Pod "downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016978338s
STEP: Saw pod success
Aug 22 00:27:07.208: INFO: Pod "downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a" satisfied condition "success or failure"
Aug 22 00:27:07.212: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a container client-container: 
STEP: delete the pod
Aug 22 00:27:07.240: INFO: Waiting for pod downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a to disappear
Aug 22 00:27:07.244: INFO: Pod downwardapi-volume-8abd7af9-0ef3-4dc6-a742-18944473610a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:27:07.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4110" for this suite.

• [SLOW TEST:6.122 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4491,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:27:07.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 22 00:27:10.386: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:27:10.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1505" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4496,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:27:10.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 22 00:27:10.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4" in namespace "projected-9665" to be "success or failure"
Aug 22 00:27:10.736: INFO: Pod "downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.370927ms
Aug 22 00:27:12.785: INFO: Pod "downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062008782s
Aug 22 00:27:14.788: INFO: Pod "downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065807502s
STEP: Saw pod success
Aug 22 00:27:14.789: INFO: Pod "downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4" satisfied condition "success or failure"
Aug 22 00:27:14.796: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4 container client-container: 
STEP: delete the pod
Aug 22 00:27:14.839: INFO: Waiting for pod downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4 to disappear
Aug 22 00:27:14.898: INFO: Pod downwardapi-volume-6621af98-5c92-4c49-af4a-b221ecbe24d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:27:14.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9665" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4536,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:27:14.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 22 00:27:14.991: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 22 00:27:19.994: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:27:20.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7390" for this suite.

• [SLOW TEST:5.198 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":276,"skipped":4552,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:27:20.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 22 00:27:26.763: INFO: Successfully updated pod "annotationupdate757c0c31-5265-4c0b-9e30-8507478755a2"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:27:28.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3193" for this suite.

• [SLOW TEST:8.667 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4558,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 22 00:27:28.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-332
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-332
STEP: Deleting pre-stop pod
Aug 22 00:27:41.952: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 22 00:27:41.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-332" for this suite.

• [SLOW TEST:13.190 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":278,"skipped":4561,"failed":0}
SSSSS
Aug 22 00:27:41.974: INFO: Running AfterSuite actions on all nodes
Aug 22 00:27:41.975: INFO: Running AfterSuite actions on node 1
Aug 22 00:27:41.975: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 4952.191 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS