I0223 23:38:47.591461 10 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0223 23:38:47.592371 10 e2e.go:109] Starting e2e run "d929b4ed-2b14-4dc9-b87a-0057b8abf1a4" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582501125 - Will randomize all specs
Will run 280 of 4845 specs

Feb 23 23:38:47.711: INFO: >>> kubeConfig: /root/.kube/config
Feb 23 23:38:47.718: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 23 23:38:47.755: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 23 23:38:47.807: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 23 23:38:47.808: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 23 23:38:47.808: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 23 23:38:47.819: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 23 23:38:47.819: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 23 23:38:47.819: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Feb 23 23:38:47.822: INFO: kube-apiserver version: v1.17.0
Feb 23 23:38:47.822: INFO: >>> kubeConfig: /root/.kube/config
Feb 23 23:38:47.829: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 23 23:38:47.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 23 23:38:47.961: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-48b21e8e-5529-408b-b164-4f86da3d54d4
STEP: Creating a pod to test consume configMaps
Feb 23 23:38:47.983: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c" in namespace "projected-2563" to be "success or failure"
Feb 23 23:38:47.987: INFO: Pod "pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728004ms
Feb 23 23:38:50.000: INFO: Pod "pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01713624s
Feb 23 23:38:52.013: INFO: Pod "pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030196237s
Feb 23 23:38:54.027: INFO: Pod "pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043947241s
STEP: Saw pod success
Feb 23 23:38:54.027: INFO: Pod "pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c" satisfied condition "success or failure"
Feb 23 23:38:54.032: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 23 23:38:54.106: INFO: Waiting for pod pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c to disappear
Feb 23 23:38:54.122: INFO: Pod pod-projected-configmaps-3f776e47-9196-4257-a8df-0f471a814c7c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 23 23:38:54.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2563" for this suite.
• [SLOW TEST:6.305 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":1,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 23 23:38:54.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5136.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5136.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5136.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5136.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 23 23:39:10.178: INFO: DNS probes using dns-5136/dns-test-4e1d0a82-ac0f-44e7-b69f-0ede84cd8b04 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 23 23:39:10.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5136" for this suite.
• [SLOW TEST:16.121 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":2,"skipped":33,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 23 23:39:10.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in
namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 23 23:39:10.393: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 23 23:39:15.459: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 23 23:39:19.482: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 23 23:39:19.517: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6901 /apis/apps/v1/namespaces/deployment-6901/deployments/test-cleanup-deployment 7541e0dc-cb38-4379-a53c-6a6b6b498325 10316499 1 2020-02-23 23:39:19 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00217eb98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}
Feb 23 23:39:19.527: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-6901 /apis/apps/v1/namespaces/deployment-6901/replicasets/test-cleanup-deployment-55ffc6b7b6 3d99177a-81d7-4713-ac57-2f8aa468ff3e 10316501 1 2020-02-23 23:39:19 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7541e0dc-cb38-4379-a53c-6a6b6b498325 0xc002234a77 0xc002234a78}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002234ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler []
[] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 23 23:39:19.527: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 23 23:39:19.527: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6901 /apis/apps/v1/namespaces/deployment-6901/replicasets/test-cleanup-controller 7ea39638-e64d-40d5-92a5-d5bf452ab5bf 10316500 1 2020-02-23 23:39:10 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 7541e0dc-cb38-4379-a53c-6a6b6b498325 0xc0022349a7 0xc0022349a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002234a08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 23 23:39:19.564: INFO: Pod "test-cleanup-controller-wpq2f" is available: &Pod{ObjectMeta:{test-cleanup-controller-wpq2f test-cleanup-controller- deployment-6901 /api/v1/namespaces/deployment-6901/pods/test-cleanup-controller-wpq2f cd270abd-b8c7-4130-8a83-c61d8de02098 10316493 0 2020-02-23 23:39:10 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 7ea39638-e64d-40d5-92a5-d5bf452ab5bf 0xc002235027 
0xc002235028}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j8d8z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j8d8z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j8d8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Ope
rator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:39:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:39:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-23 23:39:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-23 23:39:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f6c434ea7d27570fd6a652f12ee77b2d5caf5640309e4c37881e5d73849f91e0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 23 23:39:19.564: INFO: Pod 
"test-cleanup-deployment-55ffc6b7b6-xvbls" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-xvbls test-cleanup-deployment-55ffc6b7b6- deployment-6901 /api/v1/namespaces/deployment-6901/pods/test-cleanup-deployment-55ffc6b7b6-xvbls 18073682-c4cd-4819-a233-25b5ee7824f1 10316505 0 2020-02-23 23:39:19 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 3d99177a-81d7-4713-ac57-2f8aa468ff3e 0xc0022351b7 0xc0022351b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j8d8z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j8d8z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j8d8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:39:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:39:19.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6901" for this suite. 
• [SLOW TEST:9.425 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":3,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 23 23:39:19.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 23 23:39:34.448: INFO: Successfully updated pod "labelsupdatee10cf8d9-5c09-499f-9dc6-96136280082a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 23 23:39:36.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-662" for this suite.
• [SLOW TEST:16.858 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":4,"skipped":93,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 23 23:39:36.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 23 23:39:36.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8710" for this suite.
STEP: Destroying namespace "nspatchtest-f36fcdc3-c235-49b1-adc3-9c5922ea2e2c-2611" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":5,"skipped":109,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 23 23:39:37.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 23 23:39:37.328: INFO: Waiting up to 5m0s for pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476" in namespace "emptydir-7781" to be "success or failure"
Feb 23 23:39:37.344: INFO: Pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476": Phase="Pending", Reason="", readiness=false. Elapsed: 15.675063ms
Feb 23 23:39:39.351: INFO: Pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022706033s
Feb 23 23:39:41.358: INFO: Pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029917078s
Feb 23 23:39:43.372: INFO: Pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044391175s
Feb 23 23:39:45.379: INFO: Pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051327849s
Feb 23 23:39:47.386: INFO: Pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057945588s
STEP: Saw pod success
Feb 23 23:39:47.386: INFO: Pod "pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476" satisfied condition "success or failure"
Feb 23 23:39:47.389: INFO: Trying to get logs from node jerma-node pod pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476 container test-container: 
STEP: delete the pod
Feb 23 23:39:47.564: INFO: Waiting for pod pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476 to disappear
Feb 23 23:39:47.572: INFO: Pod pod-0fff464a-6fb2-4d7c-bec2-6e98e3701476 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 23 23:39:47.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7781" for this suite.
• [SLOW TEST:10.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":6,"skipped":122,"failed":0}
SSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 23 23:39:47.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Feb 23 23:39:47.714: INFO: Created pod &Pod{ObjectMeta:{dns-5458 dns-5458 /api/v1/namespaces/dns-5458/pods/dns-5458 2b12ba78-ed55-46e7-8209-78603d41558c 10316661 0 2020-02-23 23:39:47 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cknwl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cknwl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cknwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 23 23:39:47.729: INFO: The status of Pod dns-5458 is Pending, waiting for it to be Running (with Ready = true) Feb 23 23:39:50.334: INFO: The status of Pod dns-5458 is Pending, waiting for it to be Running (with Ready = true) Feb 23 23:39:51.737: INFO: The status of Pod dns-5458 is Pending, waiting for it to be Running (with Ready = true) Feb 23 23:39:53.737: INFO: The status of Pod dns-5458 is Pending, waiting for it to be Running (with Ready = true) Feb 23 23:39:55.739: INFO: The status of Pod dns-5458 is Pending, waiting for it to be Running (with Ready = true) Feb 23 23:39:57.737: INFO: The status of Pod dns-5458 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Feb 23 23:39:57.737: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5458 PodName:dns-5458 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:39:57.737: INFO: >>> kubeConfig: /root/.kube/config I0223 23:39:57.822676 10 log.go:172] (0xc002b4bc30) (0xc002566000) Create stream I0223 23:39:57.822879 10 log.go:172] (0xc002b4bc30) (0xc002566000) Stream added, broadcasting: 1 I0223 23:39:57.828774 10 log.go:172] (0xc002b4bc30) Reply frame received for 1 I0223 23:39:57.828816 10 log.go:172] (0xc002b4bc30) (0xc0024d35e0) Create stream I0223 23:39:57.828828 10 log.go:172] (0xc002b4bc30) (0xc0024d35e0) Stream added, broadcasting: 3 I0223 23:39:57.830475 10 log.go:172] (0xc002b4bc30) Reply frame received for 3 I0223 23:39:57.830506 10 log.go:172] (0xc002b4bc30) (0xc0022bc000) Create stream I0223 23:39:57.830519 10 log.go:172] (0xc002b4bc30) (0xc0022bc000) Stream added, broadcasting: 5 I0223 23:39:57.832569 10 log.go:172] (0xc002b4bc30) Reply frame received for 5 I0223 23:39:57.955654 10 log.go:172] (0xc002b4bc30) Data frame received for 3 I0223 23:39:57.955748 10 log.go:172] (0xc0024d35e0) (3) Data frame handling I0223 23:39:57.955768 10 log.go:172] (0xc0024d35e0) (3) Data frame sent I0223 23:39:58.038437 10 log.go:172] (0xc002b4bc30) (0xc0024d35e0) Stream removed, broadcasting: 3 I0223 23:39:58.038793 10 log.go:172] (0xc002b4bc30) Data frame received for 1 I0223 23:39:58.038924 10 log.go:172] (0xc002566000) (1) Data frame handling I0223 23:39:58.038955 10 log.go:172] (0xc002566000) (1) Data frame sent I0223 23:39:58.038970 10 log.go:172] (0xc002b4bc30) (0xc0022bc000) Stream removed, broadcasting: 5 I0223 23:39:58.039017 10 log.go:172] (0xc002b4bc30) (0xc002566000) Stream removed, broadcasting: 1 I0223 23:39:58.039057 10 log.go:172] (0xc002b4bc30) Go away received I0223 23:39:58.039975 10 log.go:172] (0xc002b4bc30) (0xc002566000) Stream removed, broadcasting: 1 I0223 23:39:58.040015 10 log.go:172] 
(0xc002b4bc30) (0xc0024d35e0) Stream removed, broadcasting: 3 I0223 23:39:58.040025 10 log.go:172] (0xc002b4bc30) (0xc0022bc000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Feb 23 23:39:58.040: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5458 PodName:dns-5458 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:39:58.040: INFO: >>> kubeConfig: /root/.kube/config I0223 23:39:58.084180 10 log.go:172] (0xc002a94840) (0xc00200c1e0) Create stream I0223 23:39:58.084301 10 log.go:172] (0xc002a94840) (0xc00200c1e0) Stream added, broadcasting: 1 I0223 23:39:58.088934 10 log.go:172] (0xc002a94840) Reply frame received for 1 I0223 23:39:58.089030 10 log.go:172] (0xc002a94840) (0xc0022bc0a0) Create stream I0223 23:39:58.089048 10 log.go:172] (0xc002a94840) (0xc0022bc0a0) Stream added, broadcasting: 3 I0223 23:39:58.090532 10 log.go:172] (0xc002a94840) Reply frame received for 3 I0223 23:39:58.090580 10 log.go:172] (0xc002a94840) (0xc0024d3720) Create stream I0223 23:39:58.090592 10 log.go:172] (0xc002a94840) (0xc0024d3720) Stream added, broadcasting: 5 I0223 23:39:58.091899 10 log.go:172] (0xc002a94840) Reply frame received for 5 I0223 23:39:58.169065 10 log.go:172] (0xc002a94840) Data frame received for 3 I0223 23:39:58.169246 10 log.go:172] (0xc0022bc0a0) (3) Data frame handling I0223 23:39:58.169270 10 log.go:172] (0xc0022bc0a0) (3) Data frame sent I0223 23:39:58.245208 10 log.go:172] (0xc002a94840) Data frame received for 1 I0223 23:39:58.245419 10 log.go:172] (0xc002a94840) (0xc0022bc0a0) Stream removed, broadcasting: 3 I0223 23:39:58.245518 10 log.go:172] (0xc00200c1e0) (1) Data frame handling I0223 23:39:58.245562 10 log.go:172] (0xc00200c1e0) (1) Data frame sent I0223 23:39:58.245644 10 log.go:172] (0xc002a94840) (0xc00200c1e0) Stream removed, broadcasting: 1 I0223 23:39:58.246294 10 log.go:172] (0xc002a94840) (0xc0024d3720) Stream removed, 
broadcasting: 5 I0223 23:39:58.246458 10 log.go:172] (0xc002a94840) Go away received I0223 23:39:58.246866 10 log.go:172] (0xc002a94840) (0xc00200c1e0) Stream removed, broadcasting: 1 I0223 23:39:58.246977 10 log.go:172] (0xc002a94840) (0xc0022bc0a0) Stream removed, broadcasting: 3 I0223 23:39:58.246992 10 log.go:172] (0xc002a94840) (0xc0024d3720) Stream removed, broadcasting: 5 Feb 23 23:39:58.247: INFO: Deleting pod dns-5458... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:39:58.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5458" for this suite. • [SLOW TEST:10.775 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":7,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:39:58.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's 
memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 23 23:39:58.501: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13" in namespace "projected-560" to be "success or failure" Feb 23 23:39:58.546: INFO: Pod "downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13": Phase="Pending", Reason="", readiness=false. Elapsed: 45.051776ms Feb 23 23:40:00.576: INFO: Pod "downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075137365s Feb 23 23:40:02.585: INFO: Pod "downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084117022s Feb 23 23:40:04.597: INFO: Pod "downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095713661s Feb 23 23:40:06.610: INFO: Pod "downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10885533s STEP: Saw pod success Feb 23 23:40:06.610: INFO: Pod "downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13" satisfied condition "success or failure" Feb 23 23:40:06.618: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13 container client-container: STEP: delete the pod Feb 23 23:40:06.877: INFO: Waiting for pod downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13 to disappear Feb 23 23:40:06.915: INFO: Pod downwardapi-volume-1ce74f7f-c1a3-4a89-80fa-1bfac7be0b13 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:40:06.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-560" for this suite. 
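The `Waiting up to 5m0s for pod ... to be "success or failure"` lines above (and throughout this run) come from a poll loop: check the pod phase, log the elapsed time, sleep roughly two seconds, repeat until a terminal phase or the timeout. A minimal sketch of that wait pattern — illustrative Python, not the framework's actual Go implementation; `get_phase` is a hypothetical stand-in for the API lookup:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout expires.

    Mirrors the log's pattern: check phase, report elapsed time, sleep ~2s, retry.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.2f}s')
        if phase in ('Succeeded', 'Failed'):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still {phase} after {elapsed:.0f}s')
        sleep(interval)

# Simulated phase sequence matching the log: several Pending polls, then Succeeded.
phases = iter(['Pending', 'Pending', 'Pending', 'Succeeded'])
result = wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None)
# result == 'Succeeded'
```

The injectable `clock` and `sleep` parameters are there only so the sketch can be exercised without real delays.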
• [SLOW TEST:8.567 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":8,"skipped":143,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:40:06.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:40:07.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6922" for this suite. 
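Between spec sections, the suite emits one JSON progress record per completed spec (the `{"msg":"PASSED ...","total":280,...}` lines). A small sketch of consuming those records to track progress, assuming each record sits on its own line — note `skipped` counts skips across all 4845 discovered specs, not just the 280 selected to run:

```python
import json

# Two real progress records from this run, one JSON object per line.
progress_lines = [
    '{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container\'s memory request [NodeConformance] [Conformance]","total":280,"completed":8,"skipped":143,"failed":0}',
    '{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":9,"skipped":160,"failed":0}',
]

records = [json.loads(line) for line in progress_lines]
latest = records[-1]
print(f"{latest['completed']} of {latest['total']} specs passed, "
      f"{latest['failed']} failed, {latest['skipped']} skipped so far")
```

Filtering a full run's output for lines starting with `{"msg":` recovers the whole pass/fail history this way.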
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":9,"skipped":160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:40:07.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:40:18.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-44" for this suite. • [SLOW TEST:11.332 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":10,"skipped":248,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:40:18.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 23 23:40:25.963: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:40:26.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-590" for this suite. 
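Elapsed times in these lines use Go's duration syntax (`8.10885533s`, `5m0s`, `45.051776ms`). A rough converter to seconds, covering only the unit forms that actually appear in this log — Go's `time.ParseDuration` accepts more than this sketch handles:

```python
import re

# Multi-character units must precede single-character ones in the alternation,
# so '45ms' is not parsed as '45m' + stray 's'.
_UNITS = {'h': 3600.0, 'm': 60.0, 's': 1.0, 'ms': 1e-3, 'us': 1e-6, 'ns': 1e-9}

def go_duration_to_seconds(text):
    """Convert a Go-style duration like '5m0s' or '45.051776ms' to float seconds."""
    total = 0.0
    for value, unit in re.findall(r'(\d+(?:\.\d+)?)(ms|us|ns|h|m|s)', text):
        total += float(value) * _UNITS[unit]
    return total

print(go_duration_to_seconds('5m0s'))         # 300.0
print(go_duration_to_seconds('45.051776ms'))  # ~0.045
```

This makes the log's mixed-unit timeouts (`5m0s`, `3m0s`, `30m0s`) directly comparable with the fractional-second `Elapsed:` values.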
• [SLOW TEST:7.369 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":11,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:40:26.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 
Feb 23 23:40:26.201: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f" in namespace "security-context-test-7657" to be "success or failure" Feb 23 23:40:26.212: INFO: Pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.534064ms Feb 23 23:40:28.220: INFO: Pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018802959s Feb 23 23:40:30.226: INFO: Pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024977493s Feb 23 23:40:32.233: INFO: Pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032085788s Feb 23 23:40:34.240: INFO: Pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039358493s Feb 23 23:40:34.240: INFO: Pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f" satisfied condition "success or failure" Feb 23 23:40:34.257: INFO: Got logs for pod "busybox-privileged-false-94474a1e-b5e0-453d-8f11-16060005838f": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:40:34.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7657" for this suite. 
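One detail worth decoding from the pod spec dumps earlier in the run: volume sources print `DefaultMode:*420`. The API serializes file modes as plain int32, so 420 is the decimal rendering of the familiar octal permission 0644:

```python
# DefaultMode:*420 from the pod spec dump, read back as an octal file mode.
assert oct(420) == '0o644'
assert 0o644 == 420
print(f'DefaultMode 420 == octal {420:o} (rw-r--r--)')
```

The same decimal-vs-octal reading applies to any `defaultMode`/`mode` value in dumped volume sources.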
• [SLOW TEST:8.245 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":12,"skipped":291,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:40:34.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-658a83a8-a17d-48d0-96ae-d732383ff6dc STEP: Creating a pod to test consume secrets Feb 23 23:40:34.599: INFO: Waiting up to 5m0s for pod "pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451" in namespace "secrets-943" to be "success or failure" Feb 23 23:40:34.612: INFO: Pod 
"pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451": Phase="Pending", Reason="", readiness=false. Elapsed: 13.074994ms Feb 23 23:40:36.627: INFO: Pod "pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02803579s Feb 23 23:40:38.632: INFO: Pod "pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033370769s Feb 23 23:40:40.639: INFO: Pod "pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039792769s Feb 23 23:40:42.649: INFO: Pod "pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049929071s STEP: Saw pod success Feb 23 23:40:42.649: INFO: Pod "pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451" satisfied condition "success or failure" Feb 23 23:40:42.652: INFO: Trying to get logs from node jerma-node pod pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451 container secret-volume-test: STEP: delete the pod Feb 23 23:40:42.832: INFO: Waiting for pod pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451 to disappear Feb 23 23:40:42.844: INFO: Pod pod-secrets-decabe63-a0ee-4002-974c-b48b4df7e451 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:40:42.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-943" for this suite. STEP: Destroying namespace "secret-namespace-8741" for this suite. 
• [SLOW TEST:8.601 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":13,"skipped":295,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:40:42.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 23 23:40:43.558: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 23 23:40:45.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 23:40:47.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 23:40:49.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098043, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 23 23:40:52.667: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 
23:40:52.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3645" for this suite. STEP: Destroying namespace "webhook-3645-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.997 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":14,"skipped":295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:40:52.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 23 23:40:53.024: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and 
apply) allows request with any unknown properties Feb 23 23:40:56.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3943 create -f -' Feb 23 23:40:58.715: INFO: stderr: "" Feb 23 23:40:58.715: INFO: stdout: "e2e-test-crd-publish-openapi-6196-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 23 23:40:58.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3943 delete e2e-test-crd-publish-openapi-6196-crds test-cr' Feb 23 23:40:58.825: INFO: stderr: "" Feb 23 23:40:58.825: INFO: stdout: "e2e-test-crd-publish-openapi-6196-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 23 23:40:58.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3943 apply -f -' Feb 23 23:40:59.093: INFO: stderr: "" Feb 23 23:40:59.094: INFO: stdout: "e2e-test-crd-publish-openapi-6196-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 23 23:40:59.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3943 delete e2e-test-crd-publish-openapi-6196-crds test-cr' Feb 23 23:40:59.215: INFO: stderr: "" Feb 23 23:40:59.216: INFO: stdout: "e2e-test-crd-publish-openapi-6196-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 23 23:40:59.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6196-crds' Feb 23 23:40:59.437: INFO: stderr: "" Feb 23 23:40:59.438: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6196-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:41:02.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3943" for this suite. 
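A note on the test above: the CRD it registers is generated by the suite, but based on the `kubectl explain` output in the log (top-level `spec`/`status` described only as "Specification of Waldo" / "Status of Waldo", with unknown nested properties accepted), its schema plausibly looks like the sketch below. The group, kind, and plural names are copied from the log; the schema body is an assumption, not the suite's actual manifest.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-6196-crds.crd-publish-openapi-test-unknown-in-nested.example.com
spec:
  group: crd-publish-openapi-test-unknown-in-nested.example.com
  names:
    plural: e2e-test-crd-publish-openapi-6196-crds
    kind: E2e-test-crd-publish-openapi-6196-crd
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            # Allows arbitrary unknown fields inside this embedded object,
            # which is why client-side validation accepts any properties.
            x-kubernetes-preserve-unknown-fields: true
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

With `x-kubernetes-preserve-unknown-fields: true` on an embedded object, the API server publishes that schema into the OpenAPI discovery document, so both `kubectl create`/`apply` (client-side validation) and `kubectl explain` behave as shown in the log.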
• [SLOW TEST:9.743 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":15,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:41:02.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-rr4x STEP: Creating a pod to test atomic-volume-subpath Feb 23 23:41:02.796: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rr4x" in namespace "subpath-876" to be "success or failure" Feb 23 23:41:02.807: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.102185ms Feb 23 23:41:04.823: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027158515s Feb 23 23:41:06.829: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033403186s Feb 23 23:41:08.836: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03994916s Feb 23 23:41:10.845: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 8.048718314s Feb 23 23:41:12.861: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 10.065141485s Feb 23 23:41:14.875: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 12.079124643s Feb 23 23:41:16.884: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 14.087531896s Feb 23 23:41:18.892: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 16.09575875s Feb 23 23:41:21.585: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 18.788525406s Feb 23 23:41:23.600: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 20.803993623s Feb 23 23:41:25.610: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 22.813606911s Feb 23 23:41:27.618: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Running", Reason="", readiness=true. Elapsed: 24.822468337s Feb 23 23:41:29.625: INFO: Pod "pod-subpath-test-downwardapi-rr4x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.829331628s STEP: Saw pod success Feb 23 23:41:29.626: INFO: Pod "pod-subpath-test-downwardapi-rr4x" satisfied condition "success or failure" Feb 23 23:41:29.632: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-rr4x container test-container-subpath-downwardapi-rr4x: STEP: delete the pod Feb 23 23:41:29.718: INFO: Waiting for pod pod-subpath-test-downwardapi-rr4x to disappear Feb 23 23:41:29.749: INFO: Pod pod-subpath-test-downwardapi-rr4x no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-rr4x Feb 23 23:41:29.749: INFO: Deleting pod "pod-subpath-test-downwardapi-rr4x" in namespace "subpath-876" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:41:29.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-876" for this suite. • [SLOW TEST:27.145 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":16,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Feb 23 23:41:29.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Feb 23 23:41:29.895: INFO: namespace kubectl-5337 Feb 23 23:41:29.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5337' Feb 23 23:41:30.283: INFO: stderr: "" Feb 23 23:41:30.283: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 23 23:41:31.292: INFO: Selector matched 1 pods for map[app:agnhost] Feb 23 23:41:31.292: INFO: Found 0 / 1 Feb 23 23:41:32.295: INFO: Selector matched 1 pods for map[app:agnhost] Feb 23 23:41:32.295: INFO: Found 0 / 1 Feb 23 23:41:33.290: INFO: Selector matched 1 pods for map[app:agnhost] Feb 23 23:41:33.290: INFO: Found 0 / 1 Feb 23 23:41:34.292: INFO: Selector matched 1 pods for map[app:agnhost] Feb 23 23:41:34.293: INFO: Found 0 / 1 Feb 23 23:41:35.292: INFO: Selector matched 1 pods for map[app:agnhost] Feb 23 23:41:35.292: INFO: Found 0 / 1 Feb 23 23:41:36.301: INFO: Selector matched 1 pods for map[app:agnhost] Feb 23 23:41:36.301: INFO: Found 1 / 1 Feb 23 23:41:36.301: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 23 23:41:36.325: INFO: Selector matched 1 pods for map[app:agnhost] Feb 23 23:41:36.325: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 23 23:41:36.325: INFO: wait on agnhost-master startup in kubectl-5337 Feb 23 23:41:36.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-82sq4 agnhost-master --namespace=kubectl-5337' Feb 23 23:41:36.540: INFO: stderr: "" Feb 23 23:41:36.541: INFO: stdout: "Paused\n" STEP: exposing RC Feb 23 23:41:36.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5337' Feb 23 23:41:36.709: INFO: stderr: "" Feb 23 23:41:36.709: INFO: stdout: "service/rm2 exposed\n" Feb 23 23:41:36.713: INFO: Service rm2 in namespace kubectl-5337 found. STEP: exposing service Feb 23 23:41:38.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5337' Feb 23 23:41:38.910: INFO: stderr: "" Feb 23 23:41:38.910: INFO: stdout: "service/rm3 exposed\n" Feb 23 23:41:38.939: INFO: Service rm3 in namespace kubectl-5337 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:41:41.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5337" for this suite. 
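For reference, the `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` step above creates a Service roughly equivalent to the manifest below. The names, ports, and namespace are taken from the log; the `app: agnhost` selector is an assumption inferred from the `Selector matched 1 pods for map[app:agnhost]` lines, since `expose` copies the selector from the ReplicationController.

```yaml
# Approximate Service produced by the first `kubectl expose` call in this test.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-5337
spec:
  selector:
    app: agnhost        # inherited from the RC's pod template labels (assumed)
  ports:
  - port: 1234          # Service port clients connect to
    targetPort: 6379    # container port on the agnhost pod
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, is the same idea applied to an existing Service: `rm3` reuses `rm2`'s selector and points port 2345 at the same container port 6379.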
• [SLOW TEST:11.830 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":17,"skipped":419,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:41:41.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 23 23:42:03.910: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:03.911: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:03.957543 10 log.go:172] (0xc000ee2630) (0xc0024d3180) Create stream 
I0223 23:42:03.957642 10 log.go:172] (0xc000ee2630) (0xc0024d3180) Stream added, broadcasting: 1 I0223 23:42:03.961953 10 log.go:172] (0xc000ee2630) Reply frame received for 1 I0223 23:42:03.962006 10 log.go:172] (0xc000ee2630) (0xc002a84000) Create stream I0223 23:42:03.962016 10 log.go:172] (0xc000ee2630) (0xc002a84000) Stream added, broadcasting: 3 I0223 23:42:03.964535 10 log.go:172] (0xc000ee2630) Reply frame received for 3 I0223 23:42:03.964699 10 log.go:172] (0xc000ee2630) (0xc002a14140) Create stream I0223 23:42:03.964716 10 log.go:172] (0xc000ee2630) (0xc002a14140) Stream added, broadcasting: 5 I0223 23:42:03.968385 10 log.go:172] (0xc000ee2630) Reply frame received for 5 I0223 23:42:04.071759 10 log.go:172] (0xc000ee2630) Data frame received for 3 I0223 23:42:04.072234 10 log.go:172] (0xc002a84000) (3) Data frame handling I0223 23:42:04.072287 10 log.go:172] (0xc002a84000) (3) Data frame sent I0223 23:42:04.169714 10 log.go:172] (0xc000ee2630) (0xc002a84000) Stream removed, broadcasting: 3 I0223 23:42:04.170428 10 log.go:172] (0xc000ee2630) Data frame received for 1 I0223 23:42:04.170782 10 log.go:172] (0xc000ee2630) (0xc002a14140) Stream removed, broadcasting: 5 I0223 23:42:04.170909 10 log.go:172] (0xc0024d3180) (1) Data frame handling I0223 23:42:04.171010 10 log.go:172] (0xc0024d3180) (1) Data frame sent I0223 23:42:04.171037 10 log.go:172] (0xc000ee2630) (0xc0024d3180) Stream removed, broadcasting: 1 I0223 23:42:04.171063 10 log.go:172] (0xc000ee2630) Go away received I0223 23:42:04.171329 10 log.go:172] (0xc000ee2630) (0xc0024d3180) Stream removed, broadcasting: 1 I0223 23:42:04.171345 10 log.go:172] (0xc000ee2630) (0xc002a84000) Stream removed, broadcasting: 3 I0223 23:42:04.171350 10 log.go:172] (0xc000ee2630) (0xc002a14140) Stream removed, broadcasting: 5 Feb 23 23:42:04.171: INFO: Exec stderr: "" Feb 23 23:42:04.171: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:04.171: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:04.207881 10 log.go:172] (0xc002a94a50) (0xc002327e00) Create stream I0223 23:42:04.207998 10 log.go:172] (0xc002a94a50) (0xc002327e00) Stream added, broadcasting: 1 I0223 23:42:04.211497 10 log.go:172] (0xc002a94a50) Reply frame received for 1 I0223 23:42:04.211534 10 log.go:172] (0xc002a94a50) (0xc002996460) Create stream I0223 23:42:04.211544 10 log.go:172] (0xc002a94a50) (0xc002996460) Stream added, broadcasting: 3 I0223 23:42:04.212724 10 log.go:172] (0xc002a94a50) Reply frame received for 3 I0223 23:42:04.212806 10 log.go:172] (0xc002a94a50) (0xc002327ea0) Create stream I0223 23:42:04.212819 10 log.go:172] (0xc002a94a50) (0xc002327ea0) Stream added, broadcasting: 5 I0223 23:42:04.214378 10 log.go:172] (0xc002a94a50) Reply frame received for 5 I0223 23:42:04.274052 10 log.go:172] (0xc002a94a50) Data frame received for 3 I0223 23:42:04.274115 10 log.go:172] (0xc002996460) (3) Data frame handling I0223 23:42:04.274166 10 log.go:172] (0xc002996460) (3) Data frame sent I0223 23:42:04.353566 10 log.go:172] (0xc002a94a50) (0xc002996460) Stream removed, broadcasting: 3 I0223 23:42:04.353821 10 log.go:172] (0xc002a94a50) Data frame received for 1 I0223 23:42:04.353854 10 log.go:172] (0xc002327e00) (1) Data frame handling I0223 23:42:04.353910 10 log.go:172] (0xc002327e00) (1) Data frame sent I0223 23:42:04.353942 10 log.go:172] (0xc002a94a50) (0xc002327ea0) Stream removed, broadcasting: 5 I0223 23:42:04.354027 10 log.go:172] (0xc002a94a50) (0xc002327e00) Stream removed, broadcasting: 1 I0223 23:42:04.354087 10 log.go:172] (0xc002a94a50) Go away received I0223 23:42:04.354688 10 log.go:172] (0xc002a94a50) (0xc002327e00) Stream removed, broadcasting: 1 I0223 23:42:04.354895 10 log.go:172] (0xc002a94a50) (0xc002996460) Stream removed, broadcasting: 3 I0223 23:42:04.354978 10 log.go:172] (0xc002a94a50) 
(0xc002327ea0) Stream removed, broadcasting: 5 Feb 23 23:42:04.355: INFO: Exec stderr: "" Feb 23 23:42:04.355: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:04.355: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:04.455428 10 log.go:172] (0xc0026674a0) (0xc002a84280) Create stream I0223 23:42:04.455577 10 log.go:172] (0xc0026674a0) (0xc002a84280) Stream added, broadcasting: 1 I0223 23:42:04.461897 10 log.go:172] (0xc0026674a0) Reply frame received for 1 I0223 23:42:04.461945 10 log.go:172] (0xc0026674a0) (0xc0024d32c0) Create stream I0223 23:42:04.461954 10 log.go:172] (0xc0026674a0) (0xc0024d32c0) Stream added, broadcasting: 3 I0223 23:42:04.463484 10 log.go:172] (0xc0026674a0) Reply frame received for 3 I0223 23:42:04.463503 10 log.go:172] (0xc0026674a0) (0xc002a84320) Create stream I0223 23:42:04.463510 10 log.go:172] (0xc0026674a0) (0xc002a84320) Stream added, broadcasting: 5 I0223 23:42:04.465679 10 log.go:172] (0xc0026674a0) Reply frame received for 5 I0223 23:42:04.561634 10 log.go:172] (0xc0026674a0) Data frame received for 3 I0223 23:42:04.561817 10 log.go:172] (0xc0024d32c0) (3) Data frame handling I0223 23:42:04.561875 10 log.go:172] (0xc0024d32c0) (3) Data frame sent I0223 23:42:04.666883 10 log.go:172] (0xc0026674a0) (0xc0024d32c0) Stream removed, broadcasting: 3 I0223 23:42:04.667029 10 log.go:172] (0xc0026674a0) Data frame received for 1 I0223 23:42:04.667042 10 log.go:172] (0xc002a84280) (1) Data frame handling I0223 23:42:04.667113 10 log.go:172] (0xc002a84280) (1) Data frame sent I0223 23:42:04.667138 10 log.go:172] (0xc0026674a0) (0xc002a84280) Stream removed, broadcasting: 1 I0223 23:42:04.667322 10 log.go:172] (0xc0026674a0) (0xc002a84320) Stream removed, broadcasting: 5 I0223 23:42:04.667336 10 log.go:172] (0xc0026674a0) Go away received I0223 23:42:04.667802 10 log.go:172] 
(0xc0026674a0) (0xc002a84280) Stream removed, broadcasting: 1 I0223 23:42:04.667825 10 log.go:172] (0xc0026674a0) (0xc0024d32c0) Stream removed, broadcasting: 3 I0223 23:42:04.667831 10 log.go:172] (0xc0026674a0) (0xc002a84320) Stream removed, broadcasting: 5 Feb 23 23:42:04.667: INFO: Exec stderr: "" Feb 23 23:42:04.668: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:04.668: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:04.717711 10 log.go:172] (0xc000ee2c60) (0xc0024d3540) Create stream I0223 23:42:04.718037 10 log.go:172] (0xc000ee2c60) (0xc0024d3540) Stream added, broadcasting: 1 I0223 23:42:04.723083 10 log.go:172] (0xc000ee2c60) Reply frame received for 1 I0223 23:42:04.723182 10 log.go:172] (0xc000ee2c60) (0xc0024d35e0) Create stream I0223 23:42:04.723193 10 log.go:172] (0xc000ee2c60) (0xc0024d35e0) Stream added, broadcasting: 3 I0223 23:42:04.725012 10 log.go:172] (0xc000ee2c60) Reply frame received for 3 I0223 23:42:04.725037 10 log.go:172] (0xc000ee2c60) (0xc002996500) Create stream I0223 23:42:04.725046 10 log.go:172] (0xc000ee2c60) (0xc002996500) Stream added, broadcasting: 5 I0223 23:42:04.728446 10 log.go:172] (0xc000ee2c60) Reply frame received for 5 I0223 23:42:04.804420 10 log.go:172] (0xc000ee2c60) Data frame received for 3 I0223 23:42:04.804524 10 log.go:172] (0xc0024d35e0) (3) Data frame handling I0223 23:42:04.804552 10 log.go:172] (0xc0024d35e0) (3) Data frame sent I0223 23:42:04.902904 10 log.go:172] (0xc000ee2c60) (0xc0024d35e0) Stream removed, broadcasting: 3 I0223 23:42:04.903027 10 log.go:172] (0xc000ee2c60) Data frame received for 1 I0223 23:42:04.903048 10 log.go:172] (0xc0024d3540) (1) Data frame handling I0223 23:42:04.903065 10 log.go:172] (0xc0024d3540) (1) Data frame sent I0223 23:42:04.903080 10 log.go:172] (0xc000ee2c60) (0xc002996500) Stream removed, 
broadcasting: 5 I0223 23:42:04.903119 10 log.go:172] (0xc000ee2c60) (0xc0024d3540) Stream removed, broadcasting: 1 I0223 23:42:04.903131 10 log.go:172] (0xc000ee2c60) Go away received I0223 23:42:04.903373 10 log.go:172] (0xc000ee2c60) (0xc0024d3540) Stream removed, broadcasting: 1 I0223 23:42:04.903382 10 log.go:172] (0xc000ee2c60) (0xc0024d35e0) Stream removed, broadcasting: 3 I0223 23:42:04.903391 10 log.go:172] (0xc000ee2c60) (0xc002996500) Stream removed, broadcasting: 5 Feb 23 23:42:04.903: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 23 23:42:04.903: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:04.903: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:04.944195 10 log.go:172] (0xc002a95080) (0xc002c121e0) Create stream I0223 23:42:04.944293 10 log.go:172] (0xc002a95080) (0xc002c121e0) Stream added, broadcasting: 1 I0223 23:42:04.966493 10 log.go:172] (0xc002a95080) Reply frame received for 1 I0223 23:42:04.966649 10 log.go:172] (0xc002a95080) (0xc002a141e0) Create stream I0223 23:42:04.966679 10 log.go:172] (0xc002a95080) (0xc002a141e0) Stream added, broadcasting: 3 I0223 23:42:04.968804 10 log.go:172] (0xc002a95080) Reply frame received for 3 I0223 23:42:04.968832 10 log.go:172] (0xc002a95080) (0xc002996640) Create stream I0223 23:42:04.968867 10 log.go:172] (0xc002a95080) (0xc002996640) Stream added, broadcasting: 5 I0223 23:42:04.970388 10 log.go:172] (0xc002a95080) Reply frame received for 5 I0223 23:42:05.058879 10 log.go:172] (0xc002a95080) Data frame received for 3 I0223 23:42:05.059295 10 log.go:172] (0xc002a141e0) (3) Data frame handling I0223 23:42:05.059377 10 log.go:172] (0xc002a141e0) (3) Data frame sent I0223 23:42:05.157939 10 log.go:172] (0xc002a95080) (0xc002a141e0) Stream removed, 
broadcasting: 3 I0223 23:42:05.158254 10 log.go:172] (0xc002a95080) Data frame received for 1 I0223 23:42:05.158302 10 log.go:172] (0xc002c121e0) (1) Data frame handling I0223 23:42:05.158329 10 log.go:172] (0xc002c121e0) (1) Data frame sent I0223 23:42:05.158344 10 log.go:172] (0xc002a95080) (0xc002c121e0) Stream removed, broadcasting: 1 I0223 23:42:05.158911 10 log.go:172] (0xc002a95080) (0xc002996640) Stream removed, broadcasting: 5 I0223 23:42:05.159073 10 log.go:172] (0xc002a95080) Go away received I0223 23:42:05.159405 10 log.go:172] (0xc002a95080) (0xc002c121e0) Stream removed, broadcasting: 1 I0223 23:42:05.159420 10 log.go:172] (0xc002a95080) (0xc002a141e0) Stream removed, broadcasting: 3 I0223 23:42:05.159427 10 log.go:172] (0xc002a95080) (0xc002996640) Stream removed, broadcasting: 5 Feb 23 23:42:05.159: INFO: Exec stderr: "" Feb 23 23:42:05.159: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:05.159: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:05.224369 10 log.go:172] (0xc002a95600) (0xc002c123c0) Create stream I0223 23:42:05.224584 10 log.go:172] (0xc002a95600) (0xc002c123c0) Stream added, broadcasting: 1 I0223 23:42:05.228666 10 log.go:172] (0xc002a95600) Reply frame received for 1 I0223 23:42:05.228711 10 log.go:172] (0xc002a95600) (0xc0024d3720) Create stream I0223 23:42:05.228720 10 log.go:172] (0xc002a95600) (0xc0024d3720) Stream added, broadcasting: 3 I0223 23:42:05.230785 10 log.go:172] (0xc002a95600) Reply frame received for 3 I0223 23:42:05.230811 10 log.go:172] (0xc002a95600) (0xc002a843c0) Create stream I0223 23:42:05.230823 10 log.go:172] (0xc002a95600) (0xc002a843c0) Stream added, broadcasting: 5 I0223 23:42:05.232378 10 log.go:172] (0xc002a95600) Reply frame received for 5 I0223 23:42:05.307882 10 log.go:172] (0xc002a95600) Data frame received for 3 I0223 
23:42:05.307965 10 log.go:172] (0xc0024d3720) (3) Data frame handling I0223 23:42:05.307996 10 log.go:172] (0xc0024d3720) (3) Data frame sent I0223 23:42:05.375218 10 log.go:172] (0xc002a95600) (0xc0024d3720) Stream removed, broadcasting: 3 I0223 23:42:05.375371 10 log.go:172] (0xc002a95600) Data frame received for 1 I0223 23:42:05.375397 10 log.go:172] (0xc002c123c0) (1) Data frame handling I0223 23:42:05.375414 10 log.go:172] (0xc002c123c0) (1) Data frame sent I0223 23:42:05.375444 10 log.go:172] (0xc002a95600) (0xc002a843c0) Stream removed, broadcasting: 5 I0223 23:42:05.375473 10 log.go:172] (0xc002a95600) (0xc002c123c0) Stream removed, broadcasting: 1 I0223 23:42:05.375483 10 log.go:172] (0xc002a95600) Go away received I0223 23:42:05.375739 10 log.go:172] (0xc002a95600) (0xc002c123c0) Stream removed, broadcasting: 1 I0223 23:42:05.375767 10 log.go:172] (0xc002a95600) (0xc0024d3720) Stream removed, broadcasting: 3 I0223 23:42:05.375778 10 log.go:172] (0xc002a95600) (0xc002a843c0) Stream removed, broadcasting: 5 Feb 23 23:42:05.375: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 23 23:42:05.376: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:05.376: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:05.411138 10 log.go:172] (0xc002b4be40) (0xc0029968c0) Create stream I0223 23:42:05.411202 10 log.go:172] (0xc002b4be40) (0xc0029968c0) Stream added, broadcasting: 1 I0223 23:42:05.413220 10 log.go:172] (0xc002b4be40) Reply frame received for 1 I0223 23:42:05.413241 10 log.go:172] (0xc002b4be40) (0xc002c12460) Create stream I0223 23:42:05.413246 10 log.go:172] (0xc002b4be40) (0xc002c12460) Stream added, broadcasting: 3 I0223 23:42:05.414017 10 log.go:172] (0xc002b4be40) Reply frame received for 3 I0223 
23:42:05.414037 10 log.go:172] (0xc002b4be40) (0xc002a84460) Create stream I0223 23:42:05.414042 10 log.go:172] (0xc002b4be40) (0xc002a84460) Stream added, broadcasting: 5 I0223 23:42:05.414978 10 log.go:172] (0xc002b4be40) Reply frame received for 5 I0223 23:42:05.472938 10 log.go:172] (0xc002b4be40) Data frame received for 3 I0223 23:42:05.473002 10 log.go:172] (0xc002c12460) (3) Data frame handling I0223 23:42:05.473028 10 log.go:172] (0xc002c12460) (3) Data frame sent I0223 23:42:05.549132 10 log.go:172] (0xc002b4be40) (0xc002c12460) Stream removed, broadcasting: 3 I0223 23:42:05.549491 10 log.go:172] (0xc002b4be40) Data frame received for 1 I0223 23:42:05.549507 10 log.go:172] (0xc0029968c0) (1) Data frame handling I0223 23:42:05.549535 10 log.go:172] (0xc0029968c0) (1) Data frame sent I0223 23:42:05.549547 10 log.go:172] (0xc002b4be40) (0xc0029968c0) Stream removed, broadcasting: 1 I0223 23:42:05.549915 10 log.go:172] (0xc002b4be40) (0xc002a84460) Stream removed, broadcasting: 5 I0223 23:42:05.549948 10 log.go:172] (0xc002b4be40) (0xc0029968c0) Stream removed, broadcasting: 1 I0223 23:42:05.549958 10 log.go:172] (0xc002b4be40) (0xc002c12460) Stream removed, broadcasting: 3 I0223 23:42:05.549964 10 log.go:172] (0xc002b4be40) (0xc002a84460) Stream removed, broadcasting: 5 Feb 23 23:42:05.550: INFO: Exec stderr: "" I0223 23:42:05.551025 10 log.go:172] (0xc002b4be40) Go away received Feb 23 23:42:05.551: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:05.551: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:05.592667 10 log.go:172] (0xc0015cc630) (0xc002996a00) Create stream I0223 23:42:05.592770 10 log.go:172] (0xc0015cc630) (0xc002996a00) Stream added, broadcasting: 1 I0223 23:42:05.595654 10 log.go:172] (0xc0015cc630) Reply frame received for 1 I0223 23:42:05.595680 10 
log.go:172] (0xc0015cc630) (0xc002c12640) Create stream I0223 23:42:05.595688 10 log.go:172] (0xc0015cc630) (0xc002c12640) Stream added, broadcasting: 3 I0223 23:42:05.598610 10 log.go:172] (0xc0015cc630) Reply frame received for 3 I0223 23:42:05.598763 10 log.go:172] (0xc0015cc630) (0xc0024d37c0) Create stream I0223 23:42:05.598779 10 log.go:172] (0xc0015cc630) (0xc0024d37c0) Stream added, broadcasting: 5 I0223 23:42:05.600929 10 log.go:172] (0xc0015cc630) Reply frame received for 5 I0223 23:42:05.670106 10 log.go:172] (0xc0015cc630) Data frame received for 3 I0223 23:42:05.670191 10 log.go:172] (0xc002c12640) (3) Data frame handling I0223 23:42:05.670216 10 log.go:172] (0xc002c12640) (3) Data frame sent I0223 23:42:05.742740 10 log.go:172] (0xc0015cc630) Data frame received for 1 I0223 23:42:05.742857 10 log.go:172] (0xc0015cc630) (0xc002c12640) Stream removed, broadcasting: 3 I0223 23:42:05.742914 10 log.go:172] (0xc002996a00) (1) Data frame handling I0223 23:42:05.742929 10 log.go:172] (0xc002996a00) (1) Data frame sent I0223 23:42:05.742937 10 log.go:172] (0xc0015cc630) (0xc002996a00) Stream removed, broadcasting: 1 I0223 23:42:05.743475 10 log.go:172] (0xc0015cc630) (0xc0024d37c0) Stream removed, broadcasting: 5 I0223 23:42:05.743556 10 log.go:172] (0xc0015cc630) Go away received I0223 23:42:05.743670 10 log.go:172] (0xc0015cc630) (0xc002996a00) Stream removed, broadcasting: 1 I0223 23:42:05.743767 10 log.go:172] (0xc0015cc630) (0xc002c12640) Stream removed, broadcasting: 3 I0223 23:42:05.743802 10 log.go:172] (0xc0015cc630) (0xc0024d37c0) Stream removed, broadcasting: 5 Feb 23 23:42:05.743: INFO: Exec stderr: "" Feb 23 23:42:05.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3616 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:05.744: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:05.793272 10 log.go:172] (0xc000ee3340) 
(0xc0024d39a0) Create stream I0223 23:42:05.793585 10 log.go:172] (0xc000ee3340) (0xc0024d39a0) Stream added, broadcasting: 1 I0223 23:42:05.796846 10 log.go:172] (0xc000ee3340) Reply frame received for 1 I0223 23:42:05.796899 10 log.go:172] (0xc000ee3340) (0xc002a84500) Create stream I0223 23:42:05.796912 10 log.go:172] (0xc000ee3340) (0xc002a84500) Stream added, broadcasting: 3 I0223 23:42:05.798122 10 log.go:172] (0xc000ee3340) Reply frame received for 3 I0223 23:42:05.798153 10 log.go:172] (0xc000ee3340) (0xc002a14280) Create stream I0223 23:42:05.798164 10 log.go:172] (0xc000ee3340) (0xc002a14280) Stream added, broadcasting: 5 I0223 23:42:05.799624 10 log.go:172] (0xc000ee3340) Reply frame received for 5 I0223 23:42:05.885557 10 log.go:172] (0xc000ee3340) Data frame received for 3 I0223 23:42:05.885651 10 log.go:172] (0xc002a84500) (3) Data frame handling I0223 23:42:05.885685 10 log.go:172] (0xc002a84500) (3) Data frame sent I0223 23:42:05.966378 10 log.go:172] (0xc000ee3340) Data frame received for 1 I0223 23:42:05.966584 10 log.go:172] (0xc0024d39a0) (1) Data frame handling I0223 23:42:05.966607 10 log.go:172] (0xc0024d39a0) (1) Data frame sent I0223 23:42:05.967032 10 log.go:172] (0xc000ee3340) (0xc0024d39a0) Stream removed, broadcasting: 1 I0223 23:42:05.967157 10 log.go:172] (0xc000ee3340) (0xc002a14280) Stream removed, broadcasting: 5 I0223 23:42:05.967231 10 log.go:172] (0xc000ee3340) (0xc002a84500) Stream removed, broadcasting: 3 I0223 23:42:05.967276 10 log.go:172] (0xc000ee3340) (0xc0024d39a0) Stream removed, broadcasting: 1 I0223 23:42:05.967287 10 log.go:172] (0xc000ee3340) (0xc002a84500) Stream removed, broadcasting: 3 I0223 23:42:05.967297 10 log.go:172] (0xc000ee3340) (0xc002a14280) Stream removed, broadcasting: 5 I0223 23:42:05.967752 10 log.go:172] (0xc000ee3340) Go away received Feb 23 23:42:05.967: INFO: Exec stderr: "" Feb 23 23:42:05.968: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3616 
PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:42:05.968: INFO: >>> kubeConfig: /root/.kube/config I0223 23:42:06.010454 10 log.go:172] (0xc002667a20) (0xc002a84640) Create stream I0223 23:42:06.010623 10 log.go:172] (0xc002667a20) (0xc002a84640) Stream added, broadcasting: 1 I0223 23:42:06.015773 10 log.go:172] (0xc002667a20) Reply frame received for 1 I0223 23:42:06.015835 10 log.go:172] (0xc002667a20) (0xc002996aa0) Create stream I0223 23:42:06.015843 10 log.go:172] (0xc002667a20) (0xc002996aa0) Stream added, broadcasting: 3 I0223 23:42:06.017529 10 log.go:172] (0xc002667a20) Reply frame received for 3 I0223 23:42:06.017708 10 log.go:172] (0xc002667a20) (0xc0024d3ae0) Create stream I0223 23:42:06.017751 10 log.go:172] (0xc002667a20) (0xc0024d3ae0) Stream added, broadcasting: 5 I0223 23:42:06.019565 10 log.go:172] (0xc002667a20) Reply frame received for 5 I0223 23:42:06.123274 10 log.go:172] (0xc002667a20) Data frame received for 3 I0223 23:42:06.123346 10 log.go:172] (0xc002996aa0) (3) Data frame handling I0223 23:42:06.123371 10 log.go:172] (0xc002996aa0) (3) Data frame sent I0223 23:42:06.207812 10 log.go:172] (0xc002667a20) Data frame received for 1 I0223 23:42:06.208038 10 log.go:172] (0xc002667a20) (0xc002996aa0) Stream removed, broadcasting: 3 I0223 23:42:06.208115 10 log.go:172] (0xc002a84640) (1) Data frame handling I0223 23:42:06.208149 10 log.go:172] (0xc002a84640) (1) Data frame sent I0223 23:42:06.208201 10 log.go:172] (0xc002667a20) (0xc0024d3ae0) Stream removed, broadcasting: 5 I0223 23:42:06.208265 10 log.go:172] (0xc002667a20) (0xc002a84640) Stream removed, broadcasting: 1 I0223 23:42:06.208282 10 log.go:172] (0xc002667a20) Go away received I0223 23:42:06.208676 10 log.go:172] (0xc002667a20) (0xc002a84640) Stream removed, broadcasting: 1 I0223 23:42:06.208706 10 log.go:172] (0xc002667a20) (0xc002996aa0) Stream removed, broadcasting: 3 I0223 23:42:06.208717 10 
log.go:172] (0xc002667a20) (0xc0024d3ae0) Stream removed, broadcasting: 5 Feb 23 23:42:06.208: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:42:06.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3616" for this suite. • [SLOW TEST:24.625 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":434,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:42:06.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token STEP: reading a file in the container Feb 23 23:42:16.988: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9360 pod-service-account-19c302b6-ea97-407a-9ef0-901ad34cf10b -c=test -- cat 
/var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 23 23:42:17.449: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9360 pod-service-account-19c302b6-ea97-407a-9ef0-901ad34cf10b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 23 23:42:17.822: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9360 pod-service-account-19c302b6-ea97-407a-9ef0-901ad34cf10b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:42:18.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9360" for this suite. • [SLOW TEST:11.946 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":19,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:42:18.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API 
volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 23 23:42:26.851: INFO: Successfully updated pod "labelsupdatee43d2aed-5ee4-4930-9ec0-b7e00660d26b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:42:29.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6946" for this suite. • [SLOW TEST:11.008 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":20,"skipped":475,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:42:29.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 23 23:42:29.292: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362" in namespace "security-context-test-2321" to be "success or failure" Feb 23 23:42:29.331: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362": Phase="Pending", Reason="", readiness=false. Elapsed: 38.774653ms Feb 23 23:42:31.339: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0464942s Feb 23 23:42:33.345: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053250005s Feb 23 23:42:35.656: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363888102s Feb 23 23:42:37.743: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362": Phase="Pending", Reason="", readiness=false. Elapsed: 8.451169961s Feb 23 23:42:39.776: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362": Phase="Pending", Reason="", readiness=false. Elapsed: 10.483728163s Feb 23 23:42:41.796: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.503599933s Feb 23 23:42:41.796: INFO: Pod "alpine-nnp-false-7b1181d1-3c5c-4959-9254-070b68de3362" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:42:41.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2321" for this suite. 
• [SLOW TEST:12.703 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":21,"skipped":486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:42:41.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating replication controller my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b Feb 23 23:42:42.060: INFO: Pod name my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b: Found 0 pods out of 1 Feb 23 23:42:48.574: INFO: Pod name my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b: Found 1 pods out of 1 Feb 23 23:42:48.574: INFO: 
Ensuring all pods for ReplicationController "my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b" are running Feb 23 23:42:52.602: INFO: Pod "my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b-5vt6p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 23:42:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 23:42:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 23:42:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-23 23:42:42 +0000 UTC Reason: Message:}]) Feb 23 23:42:52.602: INFO: Trying to dial the pod Feb 23 23:42:57.623: INFO: Controller my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b: Got expected result from replica 1 [my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b-5vt6p]: "my-hostname-basic-4d866c3a-a455-4c0c-84f6-0d864c5dcd8b-5vt6p", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:42:57.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3365" for this suite. 
• [SLOW TEST:15.765 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":22,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:42:57.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 23 23:42:57.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2796' Feb 23 23:42:57.981: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 23 23:42:57.982: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Feb 23 23:42:58.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-2796' Feb 23 23:42:58.151: INFO: stderr: "" Feb 23 23:42:58.151: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:42:58.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2796" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":280,"completed":23,"skipped":567,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:42:58.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:43:15.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3797" for this suite. • [SLOW TEST:17.211 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":280,"completed":24,"skipped":574,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:43:15.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:43:23.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3410" for this suite. 
• [SLOW TEST:8.198 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":25,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:43:23.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 23 23:43:31.786: INFO: &Pod{ObjectMeta:{send-events-23128ec1-88c5-44e3-a911-27d4d7e849e7 events-9701 /api/v1/namespaces/events-9701/pods/send-events-23128ec1-88c5-44e3-a911-27d4d7e849e7 5b1a619b-e0a8-4a3e-be32-3337167e5e45 10317735 0 2020-02-23 23:43:23 +0000 UTC map[name:foo time:708555076] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghgsm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghgsm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghgsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,
Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:43:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:43:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:43:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-23 23:43:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-23 23:43:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-23 23:43:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://bec3596d3221296c0e6fb1596522e1bca8e3f3ae58f50ceecdb10eaba4ee1228,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Feb 23 23:43:33.799: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 23 23:43:35.809: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:43:35.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9701" for this suite. 
• [SLOW TEST:12.301 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":280,"completed":26,"skipped":655,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:43:35.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Feb 23 23:43:35.986: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:43:52.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-7206" for this suite. • [SLOW TEST:17.025 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":27,"skipped":663,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:43:52.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Feb 23 23:44:03.091: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7822 PodName:pod-sharedvolume-5d256700-513f-4883-b528-59235f76a49a ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 23 23:44:03.091: INFO: >>> kubeConfig: /root/.kube/config I0223 23:44:03.122771 10 
log.go:172] (0xc004ebe2c0) (0xc002326c80) Create stream I0223 23:44:03.122845 10 log.go:172] (0xc004ebe2c0) (0xc002326c80) Stream added, broadcasting: 1 I0223 23:44:03.126021 10 log.go:172] (0xc004ebe2c0) Reply frame received for 1 I0223 23:44:03.126055 10 log.go:172] (0xc004ebe2c0) (0xc0024d39a0) Create stream I0223 23:44:03.126063 10 log.go:172] (0xc004ebe2c0) (0xc0024d39a0) Stream added, broadcasting: 3 I0223 23:44:03.127099 10 log.go:172] (0xc004ebe2c0) Reply frame received for 3 I0223 23:44:03.127123 10 log.go:172] (0xc004ebe2c0) (0xc002500140) Create stream I0223 23:44:03.127135 10 log.go:172] (0xc004ebe2c0) (0xc002500140) Stream added, broadcasting: 5 I0223 23:44:03.128600 10 log.go:172] (0xc004ebe2c0) Reply frame received for 5 I0223 23:44:03.204242 10 log.go:172] (0xc004ebe2c0) Data frame received for 3 I0223 23:44:03.204334 10 log.go:172] (0xc0024d39a0) (3) Data frame handling I0223 23:44:03.204359 10 log.go:172] (0xc0024d39a0) (3) Data frame sent I0223 23:44:03.259974 10 log.go:172] (0xc004ebe2c0) (0xc0024d39a0) Stream removed, broadcasting: 3 I0223 23:44:03.260227 10 log.go:172] (0xc004ebe2c0) Data frame received for 1 I0223 23:44:03.260353 10 log.go:172] (0xc004ebe2c0) (0xc002500140) Stream removed, broadcasting: 5 I0223 23:44:03.260430 10 log.go:172] (0xc002326c80) (1) Data frame handling I0223 23:44:03.260448 10 log.go:172] (0xc002326c80) (1) Data frame sent I0223 23:44:03.260467 10 log.go:172] (0xc004ebe2c0) (0xc002326c80) Stream removed, broadcasting: 1 I0223 23:44:03.260482 10 log.go:172] (0xc004ebe2c0) Go away received I0223 23:44:03.260907 10 log.go:172] (0xc004ebe2c0) (0xc002326c80) Stream removed, broadcasting: 1 I0223 23:44:03.260927 10 log.go:172] (0xc004ebe2c0) (0xc0024d39a0) Stream removed, broadcasting: 3 I0223 23:44:03.260944 10 log.go:172] (0xc004ebe2c0) (0xc002500140) Stream removed, broadcasting: 5 Feb 23 23:44:03.260: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:44:03.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7822" for this suite. • [SLOW TEST:10.367 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":28,"skipped":671,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:44:03.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service nodeport-test with type=NodePort in namespace services-7660 STEP: creating replication controller nodeport-test in namespace services-7660 I0223 23:44:03.491989 10 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7660, replica count: 2 I0223 23:44:06.544238 10 runners.go:189] 
nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 23:44:09.545170 10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 23:44:12.546400 10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 23:44:15.547068 10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0223 23:44:18.547900 10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 23 23:44:18.548: INFO: Creating new exec pod Feb 23 23:44:27.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpodstjqj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Feb 23 23:44:28.059: INFO: stderr: "I0223 23:44:27.834861 308 log.go:172] (0xc0009c09a0) (0xc0009cc000) Create stream\nI0223 23:44:27.835005 308 log.go:172] (0xc0009c09a0) (0xc0009cc000) Stream added, broadcasting: 1\nI0223 23:44:27.839004 308 log.go:172] (0xc0009c09a0) Reply frame received for 1\nI0223 23:44:27.839089 308 log.go:172] (0xc0009c09a0) (0xc00093c000) Create stream\nI0223 23:44:27.839109 308 log.go:172] (0xc0009c09a0) (0xc00093c000) Stream added, broadcasting: 3\nI0223 23:44:27.842297 308 log.go:172] (0xc0009c09a0) Reply frame received for 3\nI0223 23:44:27.842384 308 log.go:172] (0xc0009c09a0) (0xc0005bbb80) Create stream\nI0223 23:44:27.842402 308 log.go:172] (0xc0009c09a0) (0xc0005bbb80) Stream added, broadcasting: 5\nI0223 23:44:27.844687 308 log.go:172] (0xc0009c09a0) Reply frame received for 5\nI0223 23:44:27.946239 308 log.go:172] (0xc0009c09a0) Data frame received for 5\nI0223 23:44:27.946313 308 
log.go:172] (0xc0005bbb80) (5) Data frame handling\nI0223 23:44:27.946341 308 log.go:172] (0xc0005bbb80) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0223 23:44:27.952819 308 log.go:172] (0xc0009c09a0) Data frame received for 5\nI0223 23:44:27.952842 308 log.go:172] (0xc0005bbb80) (5) Data frame handling\nI0223 23:44:27.952865 308 log.go:172] (0xc0005bbb80) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0223 23:44:28.049611 308 log.go:172] (0xc0009c09a0) Data frame received for 1\nI0223 23:44:28.049720 308 log.go:172] (0xc0009c09a0) (0xc0005bbb80) Stream removed, broadcasting: 5\nI0223 23:44:28.049765 308 log.go:172] (0xc0009cc000) (1) Data frame handling\nI0223 23:44:28.049782 308 log.go:172] (0xc0009cc000) (1) Data frame sent\nI0223 23:44:28.049911 308 log.go:172] (0xc0009c09a0) (0xc00093c000) Stream removed, broadcasting: 3\nI0223 23:44:28.049999 308 log.go:172] (0xc0009c09a0) (0xc0009cc000) Stream removed, broadcasting: 1\nI0223 23:44:28.050037 308 log.go:172] (0xc0009c09a0) Go away received\nI0223 23:44:28.050780 308 log.go:172] (0xc0009c09a0) (0xc0009cc000) Stream removed, broadcasting: 1\nI0223 23:44:28.050798 308 log.go:172] (0xc0009c09a0) (0xc00093c000) Stream removed, broadcasting: 3\nI0223 23:44:28.050805 308 log.go:172] (0xc0009c09a0) (0xc0005bbb80) Stream removed, broadcasting: 5\n" Feb 23 23:44:28.059: INFO: stdout: "" Feb 23 23:44:28.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpodstjqj -- /bin/sh -x -c nc -zv -t -w 2 10.96.224.88 80' Feb 23 23:44:28.362: INFO: stderr: "I0223 23:44:28.186692 329 log.go:172] (0xc000542790) (0xc000500000) Create stream\nI0223 23:44:28.186799 329 log.go:172] (0xc000542790) (0xc000500000) Stream added, broadcasting: 1\nI0223 23:44:28.193254 329 log.go:172] (0xc000542790) Reply frame received for 1\nI0223 23:44:28.193339 329 log.go:172] (0xc000542790) (0xc0005efc20) Create stream\nI0223 23:44:28.193352 329 
log.go:172] (0xc000542790) (0xc0005efc20) Stream added, broadcasting: 3\nI0223 23:44:28.195918 329 log.go:172] (0xc000542790) Reply frame received for 3\nI0223 23:44:28.195980 329 log.go:172] (0xc000542790) (0xc0005efe00) Create stream\nI0223 23:44:28.196007 329 log.go:172] (0xc000542790) (0xc0005efe00) Stream added, broadcasting: 5\nI0223 23:44:28.202324 329 log.go:172] (0xc000542790) Reply frame received for 5\nI0223 23:44:28.270740 329 log.go:172] (0xc000542790) Data frame received for 5\nI0223 23:44:28.270776 329 log.go:172] (0xc0005efe00) (5) Data frame handling\nI0223 23:44:28.270841 329 log.go:172] (0xc0005efe00) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.224.88 80\nI0223 23:44:28.271807 329 log.go:172] (0xc000542790) Data frame received for 5\nI0223 23:44:28.271860 329 log.go:172] (0xc0005efe00) (5) Data frame handling\nI0223 23:44:28.271885 329 log.go:172] (0xc0005efe00) (5) Data frame sent\nConnection to 10.96.224.88 80 port [tcp/http] succeeded!\nI0223 23:44:28.352130 329 log.go:172] (0xc000542790) Data frame received for 1\nI0223 23:44:28.352228 329 log.go:172] (0xc000542790) (0xc0005efc20) Stream removed, broadcasting: 3\nI0223 23:44:28.352300 329 log.go:172] (0xc000500000) (1) Data frame handling\nI0223 23:44:28.352317 329 log.go:172] (0xc000500000) (1) Data frame sent\nI0223 23:44:28.352327 329 log.go:172] (0xc000542790) (0xc0005efe00) Stream removed, broadcasting: 5\nI0223 23:44:28.352361 329 log.go:172] (0xc000542790) (0xc000500000) Stream removed, broadcasting: 1\nI0223 23:44:28.352417 329 log.go:172] (0xc000542790) Go away received\nI0223 23:44:28.353258 329 log.go:172] (0xc000542790) (0xc000500000) Stream removed, broadcasting: 1\nI0223 23:44:28.353273 329 log.go:172] (0xc000542790) (0xc0005efc20) Stream removed, broadcasting: 3\nI0223 23:44:28.353279 329 log.go:172] (0xc000542790) (0xc0005efe00) Stream removed, broadcasting: 5\n" Feb 23 23:44:28.363: INFO: stdout: "" Feb 23 23:44:28.363: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-7660 execpodstjqj -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30918' Feb 23 23:44:28.752: INFO: stderr: "I0223 23:44:28.505885 349 log.go:172] (0xc000ba36b0) (0xc00091c000) Create stream\nI0223 23:44:28.506054 349 log.go:172] (0xc000ba36b0) (0xc00091c000) Stream added, broadcasting: 1\nI0223 23:44:28.509211 349 log.go:172] (0xc000ba36b0) Reply frame received for 1\nI0223 23:44:28.509257 349 log.go:172] (0xc000ba36b0) (0xc0009ec140) Create stream\nI0223 23:44:28.509264 349 log.go:172] (0xc000ba36b0) (0xc0009ec140) Stream added, broadcasting: 3\nI0223 23:44:28.510437 349 log.go:172] (0xc000ba36b0) Reply frame received for 3\nI0223 23:44:28.510464 349 log.go:172] (0xc000ba36b0) (0xc0009e7540) Create stream\nI0223 23:44:28.510477 349 log.go:172] (0xc000ba36b0) (0xc0009e7540) Stream added, broadcasting: 5\nI0223 23:44:28.511695 349 log.go:172] (0xc000ba36b0) Reply frame received for 5\nI0223 23:44:28.628388 349 log.go:172] (0xc000ba36b0) Data frame received for 5\nI0223 23:44:28.628549 349 log.go:172] (0xc0009e7540) (5) Data frame handling\nI0223 23:44:28.628648 349 log.go:172] (0xc0009e7540) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30918\nI0223 23:44:28.636210 349 log.go:172] (0xc000ba36b0) Data frame received for 5\nI0223 23:44:28.636245 349 log.go:172] (0xc0009e7540) (5) Data frame handling\nI0223 23:44:28.636262 349 log.go:172] (0xc0009e7540) (5) Data frame sent\nConnection to 10.96.2.250 30918 port [tcp/30918] succeeded!\nI0223 23:44:28.736920 349 log.go:172] (0xc000ba36b0) (0xc0009ec140) Stream removed, broadcasting: 3\nI0223 23:44:28.737204 349 log.go:172] (0xc000ba36b0) Data frame received for 1\nI0223 23:44:28.737216 349 log.go:172] (0xc00091c000) (1) Data frame handling\nI0223 23:44:28.737231 349 log.go:172] (0xc00091c000) (1) Data frame sent\nI0223 23:44:28.737246 349 log.go:172] (0xc000ba36b0) (0xc00091c000) Stream removed, broadcasting: 1\nI0223 23:44:28.737861 349 log.go:172] (0xc000ba36b0) 
(0xc0009e7540) Stream removed, broadcasting: 5\nI0223 23:44:28.737921 349 log.go:172] (0xc000ba36b0) (0xc00091c000) Stream removed, broadcasting: 1\nI0223 23:44:28.737932 349 log.go:172] (0xc000ba36b0) (0xc0009ec140) Stream removed, broadcasting: 3\nI0223 23:44:28.737939 349 log.go:172] (0xc000ba36b0) (0xc0009e7540) Stream removed, broadcasting: 5\nI0223 23:44:28.738151 349 log.go:172] (0xc000ba36b0) Go away received\n" Feb 23 23:44:28.752: INFO: stdout: "" Feb 23 23:44:28.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpodstjqj -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30918' Feb 23 23:44:29.035: INFO: stderr: "I0223 23:44:28.878912 367 log.go:172] (0xc000ae5a20) (0xc000c12820) Create stream\nI0223 23:44:28.878975 367 log.go:172] (0xc000ae5a20) (0xc000c12820) Stream added, broadcasting: 1\nI0223 23:44:28.883885 367 log.go:172] (0xc000ae5a20) Reply frame received for 1\nI0223 23:44:28.883918 367 log.go:172] (0xc000ae5a20) (0xc00056c820) Create stream\nI0223 23:44:28.883925 367 log.go:172] (0xc000ae5a20) (0xc00056c820) Stream added, broadcasting: 3\nI0223 23:44:28.885211 367 log.go:172] (0xc000ae5a20) Reply frame received for 3\nI0223 23:44:28.885236 367 log.go:172] (0xc000ae5a20) (0xc0006934a0) Create stream\nI0223 23:44:28.885241 367 log.go:172] (0xc000ae5a20) (0xc0006934a0) Stream added, broadcasting: 5\nI0223 23:44:28.886381 367 log.go:172] (0xc000ae5a20) Reply frame received for 5\nI0223 23:44:28.954105 367 log.go:172] (0xc000ae5a20) Data frame received for 5\nI0223 23:44:28.954182 367 log.go:172] (0xc0006934a0) (5) Data frame handling\nI0223 23:44:28.954211 367 log.go:172] (0xc0006934a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30918\nI0223 23:44:28.958421 367 log.go:172] (0xc000ae5a20) Data frame received for 5\nI0223 23:44:28.958446 367 log.go:172] (0xc0006934a0) (5) Data frame handling\nI0223 23:44:28.958455 367 log.go:172] (0xc0006934a0) (5) Data frame sent\nConnection to 10.96.1.234 
30918 port [tcp/30918] succeeded!\nI0223 23:44:29.022987 367 log.go:172] (0xc000ae5a20) (0xc0006934a0) Stream removed, broadcasting: 5\nI0223 23:44:29.023231 367 log.go:172] (0xc000ae5a20) Data frame received for 1\nI0223 23:44:29.023244 367 log.go:172] (0xc000c12820) (1) Data frame handling\nI0223 23:44:29.023255 367 log.go:172] (0xc000c12820) (1) Data frame sent\nI0223 23:44:29.023274 367 log.go:172] (0xc000ae5a20) (0xc000c12820) Stream removed, broadcasting: 1\nI0223 23:44:29.023642 367 log.go:172] (0xc000ae5a20) (0xc00056c820) Stream removed, broadcasting: 3\nI0223 23:44:29.023704 367 log.go:172] (0xc000ae5a20) Go away received\nI0223 23:44:29.024025 367 log.go:172] (0xc000ae5a20) (0xc000c12820) Stream removed, broadcasting: 1\nI0223 23:44:29.024040 367 log.go:172] (0xc000ae5a20) (0xc00056c820) Stream removed, broadcasting: 3\nI0223 23:44:29.024048 367 log.go:172] (0xc000ae5a20) (0xc0006934a0) Stream removed, broadcasting: 5\n" Feb 23 23:44:29.035: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:44:29.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7660" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.805 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":29,"skipped":675,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:44:29.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Feb 23 23:44:29.189: INFO: Waiting up to 5m0s for pod "var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966" in namespace "var-expansion-5711" to be "success or failure" Feb 23 23:44:29.195: INFO: Pod "var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966": Phase="Pending", Reason="", readiness=false. Elapsed: 5.958366ms Feb 23 23:44:31.200: INFO: Pod "var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011179915s Feb 23 23:44:33.207: INFO: Pod "var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017663475s Feb 23 23:44:35.351: INFO: Pod "var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16234737s Feb 23 23:44:37.360: INFO: Pod "var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.17125237s STEP: Saw pod success Feb 23 23:44:37.361: INFO: Pod "var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966" satisfied condition "success or failure" Feb 23 23:44:37.367: INFO: Trying to get logs from node jerma-node pod var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966 container dapi-container: STEP: delete the pod Feb 23 23:44:38.842: INFO: Waiting for pod var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966 to disappear Feb 23 23:44:38.855: INFO: Pod var-expansion-cc5a2d60-7995-4c9d-8dc3-01c6ed0bf966 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:44:38.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5711" for this suite. 
• [SLOW TEST:10.307 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":30,"skipped":680,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:44:39.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 23 23:44:41.292: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 23 23:44:45.102: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, 
loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 23:44:47.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 23:44:49.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 23 23:44:51.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098283, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 23 23:44:54.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 23 23:44:54.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating 
webhook for custom resource e2e-test-webhook-7862-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:44:54.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3629" for this suite. STEP: Destroying namespace "webhook-3629-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.699 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":31,"skipped":683,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:44:55.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions 
says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 23 23:45:01.400: INFO: 10 pods remaining Feb 23 23:45:01.400: INFO: 10 pods has nil DeletionTimestamp Feb 23 23:45:01.400: INFO: Feb 23 23:45:02.748: INFO: 4 pods remaining Feb 23 23:45:02.749: INFO: 0 pods has nil DeletionTimestamp Feb 23 23:45:02.749: INFO: STEP: Gathering metrics W0223 23:45:03.270176 10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 23 23:45:03.270: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:45:03.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8776" for this suite. 
• [SLOW TEST:8.202 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":32,"skipped":684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:45:03.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:45:03.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7178" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":280,"completed":33,"skipped":709,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:45:03.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Feb 23 23:45:04.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8857' Feb 23 23:45:04.906: INFO: stderr: "" Feb 23 23:45:04.906: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 23 23:45:04.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8857' Feb 23 23:45:05.185: INFO: stderr: "" Feb 23 23:45:05.185: INFO: stdout: "update-demo-nautilus-k99fx update-demo-nautilus-mvjg4 " Feb 23 23:45:05.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k99fx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:05.344: INFO: stderr: "" Feb 23 23:45:05.344: INFO: stdout: "" Feb 23 23:45:05.344: INFO: update-demo-nautilus-k99fx is created but not running Feb 23 23:45:10.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8857' Feb 23 23:45:10.813: INFO: stderr: "" Feb 23 23:45:10.813: INFO: stdout: "update-demo-nautilus-k99fx update-demo-nautilus-mvjg4 " Feb 23 23:45:10.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k99fx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:11.407: INFO: stderr: "" Feb 23 23:45:11.407: INFO: stdout: "" Feb 23 23:45:11.407: INFO: update-demo-nautilus-k99fx is created but not running Feb 23 23:45:16.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8857' Feb 23 23:45:16.776: INFO: stderr: "" Feb 23 23:45:16.777: INFO: stdout: "update-demo-nautilus-k99fx update-demo-nautilus-mvjg4 " Feb 23 23:45:16.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k99fx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:18.975: INFO: stderr: "" Feb 23 23:45:18.975: INFO: stdout: "" Feb 23 23:45:18.975: INFO: update-demo-nautilus-k99fx is created but not running Feb 23 23:45:23.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8857' Feb 23 23:45:24.220: INFO: stderr: "" Feb 23 23:45:24.220: INFO: stdout: "update-demo-nautilus-k99fx update-demo-nautilus-mvjg4 " Feb 23 23:45:24.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k99fx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:24.860: INFO: stderr: "" Feb 23 23:45:24.861: INFO: stdout: "true" Feb 23 23:45:24.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k99fx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:25.015: INFO: stderr: "" Feb 23 23:45:25.015: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 23 23:45:25.015: INFO: validating pod update-demo-nautilus-k99fx Feb 23 23:45:25.053: INFO: got data: { "image": "nautilus.jpg" } Feb 23 23:45:25.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 23 23:45:25.054: INFO: update-demo-nautilus-k99fx is verified up and running Feb 23 23:45:25.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvjg4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:25.170: INFO: stderr: "" Feb 23 23:45:25.170: INFO: stdout: "" Feb 23 23:45:25.170: INFO: update-demo-nautilus-mvjg4 is created but not running Feb 23 23:45:30.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8857' Feb 23 23:45:30.313: INFO: stderr: "" Feb 23 23:45:30.314: INFO: stdout: "update-demo-nautilus-k99fx update-demo-nautilus-mvjg4 " Feb 23 23:45:30.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k99fx -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:30.423: INFO: stderr: "" Feb 23 23:45:30.423: INFO: stdout: "true" Feb 23 23:45:30.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k99fx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:30.532: INFO: stderr: "" Feb 23 23:45:30.532: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 23 23:45:30.532: INFO: validating pod update-demo-nautilus-k99fx Feb 23 23:45:30.539: INFO: got data: { "image": "nautilus.jpg" } Feb 23 23:45:30.539: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 23 23:45:30.539: INFO: update-demo-nautilus-k99fx is verified up and running Feb 23 23:45:30.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvjg4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:30.629: INFO: stderr: "" Feb 23 23:45:30.629: INFO: stdout: "true" Feb 23 23:45:30.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvjg4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8857' Feb 23 23:45:30.707: INFO: stderr: "" Feb 23 23:45:30.707: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 23 23:45:30.707: INFO: validating pod update-demo-nautilus-mvjg4 Feb 23 23:45:30.717: INFO: got data: { "image": "nautilus.jpg" } Feb 23 23:45:30.717: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 23 23:45:30.717: INFO: update-demo-nautilus-mvjg4 is verified up and running STEP: using delete to clean up resources Feb 23 23:45:30.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8857' Feb 23 23:45:30.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 23 23:45:30.807: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 23 23:45:30.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8857' Feb 23 23:45:30.901: INFO: stderr: "No resources found in kubectl-8857 namespace.\n" Feb 23 23:45:30.901: INFO: stdout: "" Feb 23 23:45:30.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8857 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 23 23:45:30.983: INFO: stderr: "" Feb 23 23:45:30.983: INFO: stdout: "update-demo-nautilus-k99fx\nupdate-demo-nautilus-mvjg4\n" Feb 23 23:45:31.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8857' Feb 23 23:45:32.783: INFO: stderr: "No resources found in kubectl-8857 
namespace.\n" Feb 23 23:45:32.783: INFO: stdout: "" Feb 23 23:45:32.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8857 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 23 23:45:32.924: INFO: stderr: "" Feb 23 23:45:32.924: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:45:32.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8857" for this suite. • [SLOW TEST:29.049 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":280,"completed":34,"skipped":714,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:45:32.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 23 23:45:33.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 23 23:45:33.278: INFO: stderr: "" Feb 23 23:45:33.278: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:45:33.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5454" for this suite. 
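The repeated template queries in the Update Demo test above evaluate a kubectl go-template against the pod's `status.containerStatuses`, printing `true` only when the named container reports a `running` state (an empty stdout means "created but not running"). A minimal Python emulation of that check — the function name and sample pod data are illustrative, not part of the e2e framework:

```python
# Emulates the go-template the test runs via kubectl:
# {{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}
#   {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}
def container_running(pod: dict, name: str) -> str:
    """Return "true" when the named container has a running state, else ""."""
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            out += "true"
    return out

pending = {"status": {}}  # no containerStatuses yet -> empty output
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-23T23:45:20Z"}}}]}}

print(repr(container_running(pending, "update-demo")))  # ''
print(repr(container_running(running, "update-demo")))  # 'true'
```

This mirrors why the log alternates between empty stdout ("is created but not running") and `stdout: "true"` as the pods come up.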
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":280,"completed":35,"skipped":726,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:45:33.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 23 23:45:34.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909" in namespace "projected-7867" to be "success or failure" Feb 23 23:45:34.427: INFO: Pod "downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909": Phase="Pending", Reason="", readiness=false. Elapsed: 4.730033ms Feb 23 23:45:36.435: INFO: Pod "downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012635204s Feb 23 23:45:38.507: INFO: Pod "downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084353364s Feb 23 23:45:40.516: INFO: Pod "downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.09338656s Feb 23 23:45:42.529: INFO: Pod "downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106006613s STEP: Saw pod success Feb 23 23:45:42.529: INFO: Pod "downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909" satisfied condition "success or failure" Feb 23 23:45:42.535: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909 container client-container: STEP: delete the pod Feb 23 23:45:42.597: INFO: Waiting for pod downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909 to disappear Feb 23 23:45:42.611: INFO: Pod downwardapi-volume-13950fc6-b61e-4c91-8a94-b73bfea66909 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:45:42.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7867" for this suite. 
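Many tests in this run show the same pattern: "Waiting up to 5m0s for pod ... to be \"success or failure\"", followed by periodic probes that log the phase and elapsed time until the pod reaches `Succeeded`. A minimal sketch of that wait loop, with a simulated phase sequence — the function name and parameters are illustrative, not the e2e framework's actual API:

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"), timeout=300, interval=2):
    """Poll get_phase() until it returns a terminal phase or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(interval)  # the e2e framework probes on a ~2s cadence
    raise TimeoutError(f"pod did not reach {want} within {timeout}s")

# Simulated pod that reports Pending twice, then Succeeded:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), timeout=10, interval=0))  # Succeeded
```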
• [SLOW TEST:9.333 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":36,"skipped":730,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:45:42.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 23 23:45:42.787: INFO: Waiting up to 5m0s for pod "pod-2617cf5e-c0d9-4509-9c43-0764d61729f3" in namespace "emptydir-6100" to be "success or failure" Feb 23 23:45:42.913: INFO: Pod "pod-2617cf5e-c0d9-4509-9c43-0764d61729f3": Phase="Pending", Reason="", readiness=false. Elapsed: 126.210767ms Feb 23 23:45:44.919: INFO: Pod "pod-2617cf5e-c0d9-4509-9c43-0764d61729f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131607876s Feb 23 23:45:46.926: INFO: Pod "pod-2617cf5e-c0d9-4509-9c43-0764d61729f3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.138480834s Feb 23 23:45:48.935: INFO: Pod "pod-2617cf5e-c0d9-4509-9c43-0764d61729f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148202657s Feb 23 23:45:50.946: INFO: Pod "pod-2617cf5e-c0d9-4509-9c43-0764d61729f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158720766s STEP: Saw pod success Feb 23 23:45:50.946: INFO: Pod "pod-2617cf5e-c0d9-4509-9c43-0764d61729f3" satisfied condition "success or failure" Feb 23 23:45:50.950: INFO: Trying to get logs from node jerma-node pod pod-2617cf5e-c0d9-4509-9c43-0764d61729f3 container test-container: STEP: delete the pod Feb 23 23:45:51.000: INFO: Waiting for pod pod-2617cf5e-c0d9-4509-9c43-0764d61729f3 to disappear Feb 23 23:45:51.012: INFO: Pod pod-2617cf5e-c0d9-4509-9c43-0764d61729f3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:45:51.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6100" for this suite. 
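The EmptyDir test above writes a file into the volume with mode 0644 on the node's default medium and then verifies the permission bits from inside the container. The same permission check can be reproduced locally — the directory and filename here are illustrative:

```python
import os
import stat
import tempfile

# Create a file with mode 0644 and render its mode the way `ls -l` does.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test-file")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    os.close(fd)
    os.chmod(path, 0o644)  # os.open honors the umask; chmod forces 0644 exactly
    mode = stat.filemode(os.stat(path).st_mode)

print(mode)  # -rw-r--r--
```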
• [SLOW TEST:8.403 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":37,"skipped":733,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:45:51.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 23 23:46:07.314: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 23 23:46:07.325: INFO: Pod pod-with-poststart-http-hook still exists Feb 23 23:46:09.325: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 23 23:46:09.332: INFO: Pod pod-with-poststart-http-hook still exists Feb 23 23:46:11.325: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 23 23:46:11.334: INFO: Pod pod-with-poststart-http-hook still exists Feb 23 23:46:13.325: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 23 23:46:13.332: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:46:13.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9883" for this suite. 
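The pod in this test declares a `postStart` `httpGet` lifecycle hook aimed at the handler container created in BeforeEach; the kubelet fires the hook right after the container starts, and the test confirms the handler received the request before deleting the pod. An illustrative manifest with that shape, as a Python dict — the image and handler address are placeholders, not values taken from this log:

```python
# Sketch of a pod with a postStart httpGet lifecycle hook (placeholder values).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-http-hook",
            "image": "k8s.gcr.io/pause:3.1",  # placeholder image
            "lifecycle": {
                "postStart": {
                    "httpGet": {
                        "path": "/echo?msg=poststart",
                        "host": "10.0.0.10",  # handler pod IP (placeholder)
                        "port": 8080,
                    }
                }
            },
        }]
    },
}
print(pod["spec"]["containers"][0]["lifecycle"]["postStart"]["httpGet"]["path"])
```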
• [SLOW TEST:22.321 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":38,"skipped":786,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:46:13.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 23 23:46:13.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab" in namespace "downward-api-6934" to be "success or failure" Feb 
23 23:46:13.477: INFO: Pod "downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab": Phase="Pending", Reason="", readiness=false. Elapsed: 46.568524ms Feb 23 23:46:15.486: INFO: Pod "downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05555955s Feb 23 23:46:17.495: INFO: Pod "downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063884297s Feb 23 23:46:19.506: INFO: Pod "downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074889672s Feb 23 23:46:21.558: INFO: Pod "downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126986803s STEP: Saw pod success Feb 23 23:46:21.558: INFO: Pod "downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab" satisfied condition "success or failure" Feb 23 23:46:21.569: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab container client-container: STEP: delete the pod Feb 23 23:46:21.638: INFO: Waiting for pod downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab to disappear Feb 23 23:46:21.709: INFO: Pod downwardapi-volume-c714d601-1066-488a-b4ca-1952d6ee50ab no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:46:21.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6934" for this suite. 
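The downward API volume in this test exposes `limits.memory` through a `resourceFieldRef`; because the container sets no memory limit, the kubelet substitutes the node's allocatable memory, which is what the test verifies. A sketch of the volume source involved — the volume and file names are illustrative:

```python
# Sketch of a downwardAPI volume exposing the container's memory limit.
# With no limit set on the container, limits.memory resolves to node allocatable.
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "memory_limit",
            "resourceFieldRef": {
                "containerName": "client-container",
                "resource": "limits.memory",
            },
        }]
    },
}
print(volume["downwardAPI"]["items"][0]["resourceFieldRef"]["resource"])
```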
• [SLOW TEST:8.374 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":795,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:46:21.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-c84a1a45-7fef-4f53-9bd1-871f29bc31a9 STEP: Creating a pod to test consume secrets Feb 23 23:46:21.925: INFO: Waiting up to 5m0s for pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a" in namespace "secrets-8630" to be "success or failure" Feb 23 23:46:21.972: INFO: Pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.836183ms Feb 23 23:46:23.980: INFO: Pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.054798285s Feb 23 23:46:26.062: INFO: Pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136725339s Feb 23 23:46:28.069: INFO: Pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14393843s Feb 23 23:46:30.076: INFO: Pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150908189s Feb 23 23:46:32.081: INFO: Pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.155114412s STEP: Saw pod success Feb 23 23:46:32.081: INFO: Pod "pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a" satisfied condition "success or failure" Feb 23 23:46:32.083: INFO: Trying to get logs from node jerma-node pod pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a container secret-volume-test: STEP: delete the pod Feb 23 23:46:32.183: INFO: Waiting for pod pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a to disappear Feb 23 23:46:32.201: INFO: Pod pod-secrets-8b956e6c-2bf0-4bbe-9dc5-503441ec9c7a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:46:32.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8630" for this suite. 
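Secret `data` values are base64-encoded in the API object; the kubelet decodes them before writing the files that the test's `secret-volume-test` container reads back. A quick illustration of that round trip — the key and value here are made up, only the secret name comes from the log:

```python
import base64

# Secret data is base64-encoded on the wire; the kubelet decodes it into files.
encoded = base64.b64encode(b"value-1").decode()
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-test-c84a1a45-7fef-4f53-9bd1-871f29bc31a9"},
    "data": {"data-1": encoded},  # illustrative key/value
}
decoded = base64.b64decode(secret["data"]["data-1"]).decode()
print(decoded)  # value-1
```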
• [SLOW TEST:10.487 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":40,"skipped":796,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:46:32.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 23 23:46:32.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635" in namespace "downward-api-9517" to be "success or failure" Feb 23 23:46:32.410: INFO: Pod "downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.543387ms Feb 23 23:46:34.418: INFO: Pod "downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056833901s Feb 23 23:46:36.425: INFO: Pod "downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063637318s Feb 23 23:46:39.104: INFO: Pod "downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635": Phase="Pending", Reason="", readiness=false. Elapsed: 6.743478005s Feb 23 23:46:41.113: INFO: Pod "downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.751934576s STEP: Saw pod success Feb 23 23:46:41.113: INFO: Pod "downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635" satisfied condition "success or failure" Feb 23 23:46:41.118: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635 container client-container: STEP: delete the pod Feb 23 23:46:41.167: INFO: Waiting for pod downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635 to disappear Feb 23 23:46:41.172: INFO: Pod downwardapi-volume-ddd6de52-64db-44e3-97ac-2a15da2f8635 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:46:41.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9517" for this suite. 
• [SLOW TEST:9.012 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":41,"skipped":806,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:46:41.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating server pod server in namespace prestop-3709 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3709 STEP: Deleting pre-stop pod Feb 23 23:47:02.447: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:47:02.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3709" for this suite. • [SLOW TEST:21.323 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":280,"completed":42,"skipped":847,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:47:02.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 23 23:47:02.695: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8590 /api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed b3c1ab65-7488-459a-be91-34b53328d737 10318793 0 2020-02-23 23:47:02 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 23 23:47:02.696: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8590 /api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed b3c1ab65-7488-459a-be91-34b53328d737 10318794 0 2020-02-23 23:47:02 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 23 23:47:02.711: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8590 /api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed b3c1ab65-7488-459a-be91-34b53328d737 10318795 0 2020-02-23 23:47:02 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 23 23:47:02.712: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8590 /api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed b3c1ab65-7488-459a-be91-34b53328d737 10318796 0 2020-02-23 23:47:02 
+0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:47:02.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8590" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":43,"skipped":851,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:47:02.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-6248f27a-2c15-4033-bb25-b88f4857f029 STEP: Creating a pod to test consume secrets Feb 23 23:47:02.909: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340" in namespace "projected-1436" to be "success or failure" Feb 23 23:47:02.912: INFO: Pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.542041ms Feb 23 23:47:04.919: INFO: Pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010477631s Feb 23 23:47:06.932: INFO: Pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023630009s Feb 23 23:47:08.941: INFO: Pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032361362s Feb 23 23:47:10.948: INFO: Pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038847606s Feb 23 23:47:12.956: INFO: Pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.047208617s STEP: Saw pod success Feb 23 23:47:12.956: INFO: Pod "pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340" satisfied condition "success or failure" Feb 23 23:47:12.961: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340 container projected-secret-volume-test: STEP: delete the pod Feb 23 23:47:13.018: INFO: Waiting for pod pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340 to disappear Feb 23 23:47:13.059: INFO: Pod pod-projected-secrets-7ba1203c-a1ae-4234-9e04-be82a0674340 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:47:13.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1436" for this suite. 
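The projected-secret test above follows the framework's standard poll loop: fetch the pod, check its phase, and retry every couple of seconds until it reaches a terminal phase or the 5-minute timeout expires. A minimal, cluster-free sketch of that pattern (the `get_phase` callback and all names here are hypothetical stand-ins, not the e2e framework's actual API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` passes.

    Mirrors the log above: each attempt reports the elapsed time, and the
    final phase decides the "success or failure" condition.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)

# Simulated pod that stays Pending for three polls, then succeeds.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), sleep=lambda _: None))
```

Injecting `sleep` and `clock` keeps the sketch testable without real waiting; the e2e framework's own helper behaves similarly but talks to the apiserver.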
• [SLOW TEST:10.350 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":44,"skipped":855,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:47:13.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1924, will wait for the garbage collector to delete the pods Feb 23 23:47:23.314: INFO: Deleting Job.batch foo took: 8.854561ms Feb 23 23:47:23.715: INFO: Terminating Job.batch foo pods took: 401.033073ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:48:12.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1924" for this suite. 
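Deleting the Job above relies on the garbage collector: the test deletes the Job object and then waits for the collector to remove the pods that name it in their owner references. A toy, in-memory sketch of that owner-reference cascade (the data structures are illustrative only, not the real garbage collector):

```python
def cascade_delete(objects, owners, root):
    """Delete `root`, then sweep away every object whose owner is gone.

    objects: set of live object names
    owners:  dict mapping object -> its owner (an owner reference)
    """
    objects.discard(root)
    changed = True
    while changed:              # repeat until no more orphans, GC-style
        changed = False
        for obj in list(objects):
            owner = owners.get(obj)
            if owner is not None and owner not in objects:
                objects.discard(obj)   # e.g. a pod whose Job was deleted
                changed = True
    return objects

live = {"job/foo", "pod/foo-1", "pod/foo-2"}
owners = {"pod/foo-1": "job/foo", "pod/foo-2": "job/foo"}
print(sorted(cascade_delete(live, owners, "job/foo")))  # []
```

The "Ensuring job was deleted" step in the log corresponds to polling until this sweep leaves nothing behind.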
• [SLOW TEST:59.767 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":45,"skipped":856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:48:12.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:48:17.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8846" for this suite. 
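The concurrent-watch test verifies an ordering guarantee: watchers started from different resource versions of the same stream must all observe the remaining events in the same relative order. A toy sketch of that property over an append-only event log (the `EventLog` class is hypothetical; real watches are served by the apiserver):

```python
class EventLog:
    """Append-only log of (resource_version, event) pairs."""
    def __init__(self):
        self.events = []

    def append(self, event):
        rv = len(self.events) + 1      # monotonically increasing RV
        self.events.append((rv, event))
        return rv

    def watch_from(self, resource_version):
        """Return every event recorded after `resource_version`."""
        return [e for rv, e in self.events if rv > resource_version]

log = EventLog()
rvs = [log.append(f"MODIFIED #{i}") for i in range(5)]

# Watchers started at every historical RV see a suffix of the same order:
full = log.watch_from(0)
for rv in rvs:
    suffix = log.watch_from(rv)
    assert full[len(full) - len(suffix):] == suffix
print("all watchers agree on event order")
```

The real test does the analogous check with a background goroutine producing events and many watches opened at each observed resourceVersion.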
• [SLOW TEST:5.172 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":46,"skipped":893,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:48:18.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1266 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1266;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1266 A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-1266;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1266.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1266.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1266.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1266.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1266.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1266.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1266.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 79.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.79_udp@PTR;check="$$(dig +tcp +noall +answer +search 79.100.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.100.79_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1266 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1266;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1266 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1266;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1266.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1266.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1266.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1266.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1266.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1266.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1266.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1266.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1266.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 79.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.79_udp@PTR;check="$$(dig +tcp +noall +answer +search 79.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.79_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 23 23:48:30.216: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.225: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.232: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.241: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.250: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods 
dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.264: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.276: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.320: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.325: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.329: INFO: Unable to read jessie_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.332: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.335: INFO: Unable to read jessie_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the 
requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.338: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.341: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.344: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:30.363: INFO: Lookups using dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1266 wheezy_tcp@dns-test-service.dns-1266 wheezy_udp@dns-test-service.dns-1266.svc wheezy_tcp@dns-test-service.dns-1266.svc wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1266 jessie_tcp@dns-test-service.dns-1266 jessie_udp@dns-test-service.dns-1266.svc jessie_tcp@dns-test-service.dns-1266.svc jessie_udp@_http._tcp.dns-test-service.dns-1266.svc jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc] Feb 23 23:48:35.370: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.374: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not 
find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.381: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.391: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.394: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.422: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.425: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: 
the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.428: INFO: Unable to read jessie_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.432: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.435: INFO: Unable to read jessie_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.444: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.454: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:35.485: INFO: Lookups using dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1266 wheezy_tcp@dns-test-service.dns-1266 wheezy_udp@dns-test-service.dns-1266.svc wheezy_tcp@dns-test-service.dns-1266.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1266 jessie_tcp@dns-test-service.dns-1266 jessie_udp@dns-test-service.dns-1266.svc jessie_tcp@dns-test-service.dns-1266.svc jessie_udp@_http._tcp.dns-test-service.dns-1266.svc jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc] Feb 23 23:48:41.150: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.165: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.200: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.222: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.225: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.228: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.230: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.233: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.487: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.494: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.500: INFO: Unable to read jessie_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.504: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.510: INFO: Unable to read jessie_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.516: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.520: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.525: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:41.556: INFO: Lookups using dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1266 wheezy_tcp@dns-test-service.dns-1266 wheezy_udp@dns-test-service.dns-1266.svc wheezy_tcp@dns-test-service.dns-1266.svc wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1266 jessie_tcp@dns-test-service.dns-1266 jessie_udp@dns-test-service.dns-1266.svc jessie_tcp@dns-test-service.dns-1266.svc jessie_udp@_http._tcp.dns-test-service.dns-1266.svc jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc] Feb 23 23:48:45.372: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.378: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 
23:48:45.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.391: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.394: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.397: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.400: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.421: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.424: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.427: INFO: Unable to read jessie_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods 
dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.433: INFO: Unable to read jessie_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.436: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.438: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.441: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:45.458: INFO: Lookups using dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1266 wheezy_tcp@dns-test-service.dns-1266 wheezy_udp@dns-test-service.dns-1266.svc wheezy_tcp@dns-test-service.dns-1266.svc wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1266 jessie_tcp@dns-test-service.dns-1266 jessie_udp@dns-test-service.dns-1266.svc jessie_tcp@dns-test-service.dns-1266.svc 
jessie_udp@_http._tcp.dns-test-service.dns-1266.svc jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc] Feb 23 23:48:50.389: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.397: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.401: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.405: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.410: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.415: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.419: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.423: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod 
dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.475: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.479: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.484: INFO: Unable to read jessie_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.536: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.543: INFO: Unable to read jessie_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.551: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.563: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.572: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:50.629: INFO: Lookups using dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1266 wheezy_tcp@dns-test-service.dns-1266 wheezy_udp@dns-test-service.dns-1266.svc wheezy_tcp@dns-test-service.dns-1266.svc wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1266 jessie_tcp@dns-test-service.dns-1266 jessie_udp@dns-test-service.dns-1266.svc jessie_tcp@dns-test-service.dns-1266.svc jessie_udp@_http._tcp.dns-test-service.dns-1266.svc jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc] Feb 23 23:48:55.374: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.379: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.383: INFO: Unable to read wheezy_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.388: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.393: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.397: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.401: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.405: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.482: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.486: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.490: INFO: Unable to read jessie_udp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.496: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266 from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.501: 
INFO: Unable to read jessie_udp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.525: INFO: Unable to read jessie_tcp@dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.529: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc from pod dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab: the server could not find the requested resource (get pods dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab) Feb 23 23:48:55.575: INFO: Lookups using dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1266 wheezy_tcp@dns-test-service.dns-1266 wheezy_udp@dns-test-service.dns-1266.svc wheezy_tcp@dns-test-service.dns-1266.svc wheezy_udp@_http._tcp.dns-test-service.dns-1266.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1266.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1266 jessie_tcp@dns-test-service.dns-1266 jessie_udp@dns-test-service.dns-1266.svc jessie_tcp@dns-test-service.dns-1266.svc jessie_udp@_http._tcp.dns-test-service.dns-1266.svc jessie_tcp@_http._tcp.dns-test-service.dns-1266.svc] Feb 23 23:49:00.597: INFO: DNS probes using dns-1266/dns-test-065fb2dc-4332-447a-807b-9cdfd5342eab succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:49:01.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1266" for this suite. • [SLOW TEST:43.368 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":47,"skipped":900,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:49:01.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 STEP: creating an pod Feb 23 23:49:01.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8514 -- logs-generator --log-lines-total 100 --run-duration 20s' Feb 23 23:49:01.892: INFO: stderr: "" Feb 23 
23:49:01.892: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Feb 23 23:49:01.892: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Feb 23 23:49:01.893: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8514" to be "running and ready, or succeeded" Feb 23 23:49:01.906: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.557927ms Feb 23 23:49:03.919: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0262185s Feb 23 23:49:05.924: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031194687s Feb 23 23:49:07.932: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038973513s Feb 23 23:49:09.938: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.045572523s Feb 23 23:49:09.938: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Feb 23 23:49:09.938: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Feb 23 23:49:09.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8514' Feb 23 23:49:10.109: INFO: stderr: "" Feb 23 23:49:10.109: INFO: stdout: "I0223 23:49:08.607274 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/c5l 572\nI0223 23:49:08.807488 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/pcq2 405\nI0223 23:49:09.007712 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/xtp 543\nI0223 23:49:09.207653 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/skw 408\nI0223 23:49:09.407633 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/58mb 395\nI0223 23:49:09.607604 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/vx2d 467\nI0223 23:49:09.807638 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/56t9 504\nI0223 23:49:10.007656 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/kdx 260\n" STEP: limiting log lines Feb 23 23:49:10.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8514 --tail=1' Feb 23 23:49:10.219: INFO: stderr: "" Feb 23 23:49:10.219: INFO: stdout: "I0223 23:49:10.207568 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/h87s 447\n" Feb 23 23:49:10.219: INFO: got output "I0223 23:49:10.207568 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/h87s 447\n" STEP: limiting log bytes Feb 23 23:49:10.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8514 --limit-bytes=1' Feb 23 23:49:10.369: INFO: stderr: "" Feb 23 23:49:10.369: INFO: stdout: "I" Feb 23 23:49:10.370: INFO: got output "I" STEP: exposing timestamps Feb 23 23:49:10.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-8514 --tail=1 --timestamps' Feb 23 23:49:10.495: INFO: stderr: "" Feb 23 23:49:10.495: INFO: stdout: "2020-02-23T23:49:10.410728663Z I0223 23:49:10.408100 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/mm4 362\n" Feb 23 23:49:10.495: INFO: got output "2020-02-23T23:49:10.410728663Z I0223 23:49:10.408100 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/mm4 362\n" STEP: restricting to a time range Feb 23 23:49:12.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8514 --since=1s' Feb 23 23:49:13.172: INFO: stderr: "" Feb 23 23:49:13.172: INFO: stdout: "I0223 23:49:12.207620 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/zbz 448\nI0223 23:49:12.407631 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ktmb 457\nI0223 23:49:12.607530 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/9ft 441\nI0223 23:49:12.807915 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/dkv 384\nI0223 23:49:13.007726 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/d6k 283\n" Feb 23 23:49:13.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8514 --since=24h' Feb 23 23:49:13.301: INFO: stderr: "" Feb 23 23:49:13.301: INFO: stdout: "I0223 23:49:08.607274 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/c5l 572\nI0223 23:49:08.807488 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/pcq2 405\nI0223 23:49:09.007712 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/xtp 543\nI0223 23:49:09.207653 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/skw 408\nI0223 23:49:09.407633 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/58mb 395\nI0223 23:49:09.607604 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/vx2d 467\nI0223 23:49:09.807638 1 logs_generator.go:76] 6 GET 
/api/v1/namespaces/default/pods/56t9 504\nI0223 23:49:10.007656 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/kdx 260\nI0223 23:49:10.207568 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/h87s 447\nI0223 23:49:10.408100 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/mm4 362\nI0223 23:49:10.607674 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/8p4b 504\nI0223 23:49:10.807563 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/256 570\nI0223 23:49:11.007541 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/t29 367\nI0223 23:49:11.207581 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/2lh 568\nI0223 23:49:11.407700 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/nk5 433\nI0223 23:49:11.607647 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/652k 309\nI0223 23:49:11.807626 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/mwxj 359\nI0223 23:49:12.007561 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/q8z 488\nI0223 23:49:12.207620 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/zbz 448\nI0223 23:49:12.407631 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ktmb 457\nI0223 23:49:12.607530 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/9ft 441\nI0223 23:49:12.807915 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/dkv 384\nI0223 23:49:13.007726 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/d6k 283\nI0223 23:49:13.207468 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/fwfg 407\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472 Feb 23 23:49:13.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8514' Feb 23 23:49:22.374: INFO: stderr: "" Feb 23 23:49:22.375: INFO: stdout: "pod 
\"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:49:22.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8514" for this suite. • [SLOW TEST:21.003 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":48,"skipped":952,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:49:22.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:49:33.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6208" for this suite. • [SLOW TEST:11.214 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":280,"completed":49,"skipped":964,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:49:33.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-31939cea-9e8b-4321-bd6a-364a43175d3d STEP: Creating a pod to test consume configMaps Feb 23 23:49:33.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51" in namespace "configmap-1076" to be "success or failure" Feb 23 23:49:33.773: INFO: Pod "pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51": Phase="Pending", Reason="", readiness=false. Elapsed: 7.093604ms Feb 23 23:49:35.781: INFO: Pod "pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015097443s Feb 23 23:49:37.790: INFO: Pod "pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024241357s Feb 23 23:49:39.798: INFO: Pod "pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.032117635s Feb 23 23:49:41.809: INFO: Pod "pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04281291s STEP: Saw pod success Feb 23 23:49:41.809: INFO: Pod "pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51" satisfied condition "success or failure" Feb 23 23:49:41.815: INFO: Trying to get logs from node jerma-node pod pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51 container configmap-volume-test: STEP: delete the pod Feb 23 23:49:42.029: INFO: Waiting for pod pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51 to disappear Feb 23 23:49:42.053: INFO: Pod pod-configmaps-993e9e28-b44b-4219-8ac8-8a686f0c1f51 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:49:42.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1076" for this suite. • [SLOW TEST:8.538 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":50,"skipped":994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:49:42.143: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 23 23:49:52.332: INFO: DNS probes using dns-3364/dns-test-3822c031-ed4c-410d-87d0-b967aba84148 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:49:52.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3364" for this suite. • [SLOW TEST:10.382 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":280,"completed":51,"skipped":1019,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:49:52.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 23 23:49:52.733: INFO: Waiting up to 5m0s for pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb" in namespace "emptydir-2246" to be "success or failure" Feb 23 23:49:52.803: INFO: Pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb": Phase="Pending", Reason="", readiness=false. Elapsed: 69.643434ms Feb 23 23:49:54.817: INFO: Pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084209048s Feb 23 23:49:56.829: INFO: Pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095836057s Feb 23 23:49:58.839: INFO: Pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105759952s Feb 23 23:50:00.849: INFO: Pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116141831s Feb 23 23:50:02.868: INFO: Pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134796437s STEP: Saw pod success Feb 23 23:50:02.868: INFO: Pod "pod-b81a809c-f12e-48bf-95b2-4fc369b868bb" satisfied condition "success or failure" Feb 23 23:50:02.886: INFO: Trying to get logs from node jerma-node pod pod-b81a809c-f12e-48bf-95b2-4fc369b868bb container test-container: STEP: delete the pod Feb 23 23:50:03.115: INFO: Waiting for pod pod-b81a809c-f12e-48bf-95b2-4fc369b868bb to disappear Feb 23 23:50:03.163: INFO: Pod pod-b81a809c-f12e-48bf-95b2-4fc369b868bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:50:03.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2246" for this suite. 
• [SLOW TEST:10.656 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":1030,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:50:03.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-secret-sx7k STEP: Creating a pod to test atomic-volume-subpath Feb 23 23:50:03.392: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sx7k" in namespace "subpath-2608" to be "success or failure" Feb 23 23:50:03.405: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.401071ms Feb 23 23:50:05.413: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020730027s Feb 23 23:50:07.424: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031457221s Feb 23 23:50:09.432: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039302485s Feb 23 23:50:11.438: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 8.045638973s Feb 23 23:50:13.456: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 10.063802011s Feb 23 23:50:15.462: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 12.069443656s Feb 23 23:50:17.468: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 14.075174012s Feb 23 23:50:19.476: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 16.083914534s Feb 23 23:50:21.486: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 18.09362403s Feb 23 23:50:23.493: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 20.100575531s Feb 23 23:50:25.498: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 22.106053853s Feb 23 23:50:27.506: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 24.113170341s Feb 23 23:50:29.511: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 26.118820614s Feb 23 23:50:31.519: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Running", Reason="", readiness=true. Elapsed: 28.126479035s Feb 23 23:50:33.804: INFO: Pod "pod-subpath-test-secret-sx7k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.411619552s STEP: Saw pod success Feb 23 23:50:33.804: INFO: Pod "pod-subpath-test-secret-sx7k" satisfied condition "success or failure" Feb 23 23:50:33.838: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-sx7k container test-container-subpath-secret-sx7k: STEP: delete the pod Feb 23 23:50:33.952: INFO: Waiting for pod pod-subpath-test-secret-sx7k to disappear Feb 23 23:50:33.959: INFO: Pod pod-subpath-test-secret-sx7k no longer exists STEP: Deleting pod pod-subpath-test-secret-sx7k Feb 23 23:50:33.959: INFO: Deleting pod "pod-subpath-test-secret-sx7k" in namespace "subpath-2608" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:50:33.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2608" for this suite. • [SLOW TEST:30.794 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":53,"skipped":1038,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Feb 23 23:50:33.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 23 23:50:34.100: INFO: Waiting up to 5m0s for pod "busybox-user-65534-7f177966-f76b-4a92-9deb-f50b7047b5e4" in namespace "security-context-test-146" to be "success or failure" Feb 23 23:50:34.115: INFO: Pod "busybox-user-65534-7f177966-f76b-4a92-9deb-f50b7047b5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.640025ms Feb 23 23:50:36.121: INFO: Pod "busybox-user-65534-7f177966-f76b-4a92-9deb-f50b7047b5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020435943s Feb 23 23:50:38.128: INFO: Pod "busybox-user-65534-7f177966-f76b-4a92-9deb-f50b7047b5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027708577s Feb 23 23:50:40.151: INFO: Pod "busybox-user-65534-7f177966-f76b-4a92-9deb-f50b7047b5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050119617s Feb 23 23:50:42.276: INFO: Pod "busybox-user-65534-7f177966-f76b-4a92-9deb-f50b7047b5e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.175488059s Feb 23 23:50:42.276: INFO: Pod "busybox-user-65534-7f177966-f76b-4a92-9deb-f50b7047b5e4" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:50:42.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-146" for this suite. 
• [SLOW TEST:8.330 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":54,"skipped":1074,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:50:42.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-map-fd69bcdb-1444-426b-a074-465bcf7baed3 STEP: Creating a pod to test consume secrets Feb 23 23:50:42.440: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5" in namespace "projected-6773" to be "success or failure" Feb 23 23:50:43.374: INFO: Pod 
"pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5": Phase="Pending", Reason="", readiness=false. Elapsed: 933.434738ms Feb 23 23:50:45.382: INFO: Pod "pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941394745s Feb 23 23:50:47.401: INFO: Pod "pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.961017002s Feb 23 23:50:49.408: INFO: Pod "pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.967781875s Feb 23 23:50:51.417: INFO: Pod "pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.976443974s STEP: Saw pod success Feb 23 23:50:51.417: INFO: Pod "pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5" satisfied condition "success or failure" Feb 23 23:50:51.422: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5 container projected-secret-volume-test: STEP: delete the pod Feb 23 23:50:51.521: INFO: Waiting for pod pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5 to disappear Feb 23 23:50:51.588: INFO: Pod pod-projected-secrets-6e61d37c-866a-4d07-b65b-52f0a86608e5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:50:51.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6773" for this suite. 
• [SLOW TEST:9.305 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":55,"skipped":1087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:50:51.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 23 23:50:51.824: INFO: Waiting up to 5m0s for pod "pod-013ef8c2-1180-4150-8267-d762ff5485db" in namespace "emptydir-8237" to be "success or failure" Feb 23 23:50:51.831: INFO: Pod "pod-013ef8c2-1180-4150-8267-d762ff5485db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.712853ms Feb 23 23:50:54.079: INFO: Pod "pod-013ef8c2-1180-4150-8267-d762ff5485db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.254523293s Feb 23 23:50:56.120: INFO: Pod "pod-013ef8c2-1180-4150-8267-d762ff5485db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295501085s Feb 23 23:50:58.128: INFO: Pod "pod-013ef8c2-1180-4150-8267-d762ff5485db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303670909s Feb 23 23:51:00.136: INFO: Pod "pod-013ef8c2-1180-4150-8267-d762ff5485db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.311811176s STEP: Saw pod success Feb 23 23:51:00.136: INFO: Pod "pod-013ef8c2-1180-4150-8267-d762ff5485db" satisfied condition "success or failure" Feb 23 23:51:00.140: INFO: Trying to get logs from node jerma-node pod pod-013ef8c2-1180-4150-8267-d762ff5485db container test-container: STEP: delete the pod Feb 23 23:51:00.589: INFO: Waiting for pod pod-013ef8c2-1180-4150-8267-d762ff5485db to disappear Feb 23 23:51:00.603: INFO: Pod pod-013ef8c2-1180-4150-8267-d762ff5485db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:51:00.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8237" for this suite. 
• [SLOW TEST:9.000 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":1134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:51:00.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-53371135-085e-4401-a155-22d007678c7c in namespace container-probe-2860 Feb 23 23:51:09.005: INFO: Started pod test-webserver-53371135-085e-4401-a155-22d007678c7c in namespace container-probe-2860 STEP: checking the pod's current state and verifying that restartCount is present Feb 23 23:51:09.008: INFO: Initial restart count of pod test-webserver-53371135-085e-4401-a155-22d007678c7c is 
0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:55:10.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2860" for this suite. • [SLOW TEST:250.105 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":57,"skipped":1200,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:55:10.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-f9e464cc-95ab-459d-b4f3-b67366eab6d1 STEP: Creating a pod to test consume configMaps Feb 23 23:55:10.926: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53" in namespace "projected-1705" to be 
"success or failure" Feb 23 23:55:10.932: INFO: Pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53": Phase="Pending", Reason="", readiness=false. Elapsed: 5.917823ms Feb 23 23:55:12.939: INFO: Pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012671165s Feb 23 23:55:14.947: INFO: Pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021349466s Feb 23 23:55:17.332: INFO: Pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406054144s Feb 23 23:55:19.349: INFO: Pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.423364343s Feb 23 23:55:21.357: INFO: Pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.430834998s STEP: Saw pod success Feb 23 23:55:21.357: INFO: Pod "pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53" satisfied condition "success or failure" Feb 23 23:55:21.362: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53 container projected-configmap-volume-test: STEP: delete the pod Feb 23 23:55:21.650: INFO: Waiting for pod pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53 to disappear Feb 23 23:55:21.723: INFO: Pod pod-projected-configmaps-c32136a2-beac-4a57-8fc7-15daebbd7d53 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:55:21.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1705" for this suite. 
• [SLOW TEST:11.020 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":58,"skipped":1204,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:55:21.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 23 23:55:21.982: INFO: PodSpec: initContainers in spec.initContainers Feb 23 23:56:22.046: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e6a05445-1536-4b86-8bcd-efbacd80d6c3", GenerateName:"", Namespace:"init-container-6281", 
SelfLink:"/api/v1/namespaces/init-container-6281/pods/pod-init-e6a05445-1536-4b86-8bcd-efbacd80d6c3", UID:"2fe7674e-d17d-45bf-bd4e-69a0829a7e52", ResourceVersion:"10320647", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718098921, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"982494398"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-99t2r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004d90000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-99t2r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-99t2r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-99t2r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b3e068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001e922a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b3e0f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001b3e110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b3e118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b3e11c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098922, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098922, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098922, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718098921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0047f80a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b70070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b700e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://7802f6c8f04990c684122e749b676fe83d893463f5657c18933a64e01ec998ad", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0047f80e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0047f80c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001b3e22f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:56:22.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6281" 
for this suite. • [SLOW TEST:60.402 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":59,"skipped":1219,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:56:22.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Feb 23 23:56:22.275: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:56:40.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "crd-publish-openapi-2454" for this suite. • [SLOW TEST:18.103 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":60,"skipped":1223,"failed":0} SSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:56:40.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:57:12.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5788" for this suite. 
• [SLOW TEST:32.161 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":61,"skipped":1226,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:57:12.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-5521/configmap-test-045d25e7-7fc8-43e3-a088-86a9d5c55716 STEP: Creating a pod to test consume configMaps Feb 23 23:57:12.606: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b" in namespace "configmap-5521" to be "success or failure" Feb 23 23:57:12.612: INFO: Pod "pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.888357ms Feb 23 23:57:14.619: INFO: Pod "pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01301906s Feb 23 23:57:16.632: INFO: Pod "pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025633228s Feb 23 23:57:18.658: INFO: Pod "pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052198696s Feb 23 23:57:20.664: INFO: Pod "pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058035312s STEP: Saw pod success Feb 23 23:57:20.664: INFO: Pod "pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b" satisfied condition "success or failure" Feb 23 23:57:20.667: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b container env-test: STEP: delete the pod Feb 23 23:57:20.728: INFO: Waiting for pod pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b to disappear Feb 23 23:57:20.739: INFO: Pod pod-configmaps-d8ecd490-f045-47fa-ab7e-c24980edc59b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:57:20.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5521" for this suite. 
• [SLOW TEST:8.413 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":62,"skipped":1229,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:57:20.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 23 23:57:31.158: INFO: Waiting up to 5m0s for pod "client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba" in namespace "pods-1602" to be "success or failure" Feb 23 23:57:31.231: INFO: Pod "client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba": Phase="Pending", Reason="", readiness=false. Elapsed: 72.554849ms Feb 23 23:57:33.242: INFO: Pod "client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083149348s Feb 23 23:57:35.253: INFO: Pod "client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.094170106s Feb 23 23:57:37.261: INFO: Pod "client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102628892s Feb 23 23:57:39.269: INFO: Pod "client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110386892s STEP: Saw pod success Feb 23 23:57:39.269: INFO: Pod "client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba" satisfied condition "success or failure" Feb 23 23:57:39.273: INFO: Trying to get logs from node jerma-node pod client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba container env3cont: STEP: delete the pod Feb 23 23:57:39.314: INFO: Waiting for pod client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba to disappear Feb 23 23:57:39.333: INFO: Pod client-envvars-2137765f-1b25-49a2-a5fa-428b2bee92ba no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:57:39.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1602" for this suite. 
• [SLOW TEST:18.547 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":63,"skipped":1230,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:57:39.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Feb 23 23:57:39.452: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 23 23:57:39.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8437" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":280,"completed":64,"skipped":1249,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 23 23:57:39.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5662 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Feb 23 23:57:39.737: INFO: Found 0 stateful pods, waiting for 3 Feb 23 23:57:49.744: INFO: Found 1 stateful pods, waiting for 3 Feb 23 23:57:59.782: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 23 23:57:59.783: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 23 23:57:59.783: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 23 23:58:09.771: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running 
- Ready=true Feb 23 23:58:09.771: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 23 23:58:09.771: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 23 23:58:09.821: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 23 23:58:19.885: INFO: Updating stateful set ss2 Feb 23 23:58:19.909: INFO: Waiting for Pod statefulset-5662/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 23 23:58:29.924: INFO: Waiting for Pod statefulset-5662/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Feb 23 23:58:40.508: INFO: Found 2 stateful pods, waiting for 3 Feb 23 23:58:50.519: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 23 23:58:50.519: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 23 23:58:50.519: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 23 23:59:00.523: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 23 23:59:00.523: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 23 23:59:00.523: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 23 23:59:00.571: INFO: Updating stateful set ss2 Feb 23 23:59:00.580: INFO: Waiting for Pod statefulset-5662/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 23 23:59:11.187: INFO: Updating stateful set ss2 Feb 23 23:59:11.217: INFO: Waiting for 
StatefulSet statefulset-5662/ss2 to complete update Feb 23 23:59:11.217: INFO: Waiting for Pod statefulset-5662/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 23 23:59:21.232: INFO: Waiting for StatefulSet statefulset-5662/ss2 to complete update Feb 23 23:59:21.233: INFO: Waiting for Pod statefulset-5662/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 23 23:59:31.232: INFO: Deleting all statefulset in ns statefulset-5662 Feb 23 23:59:31.237: INFO: Scaling statefulset ss2 to 0 Feb 24 00:00:01.297: INFO: Waiting for statefulset status.replicas updated to 0 Feb 24 00:00:01.303: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:00:01.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5662" for this suite. 
• [SLOW TEST:141.800 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":65,"skipped":1267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:00:01.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-1889ec27-823c-4d9a-84b8-92b86964ad1f STEP: Creating a pod to test consume configMaps Feb 24 00:00:01.571: INFO: Waiting up to 5m0s for pod "pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e" in namespace "configmap-6727" to be "success or failure" Feb 24 00:00:01.597: INFO: Pod "pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e": Phase="Pending", 
Reason="", readiness=false. Elapsed: 25.28559ms Feb 24 00:00:03.605: INFO: Pod "pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033962416s Feb 24 00:00:05.612: INFO: Pod "pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040620098s Feb 24 00:00:07.646: INFO: Pod "pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07490166s Feb 24 00:00:09.653: INFO: Pod "pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081761233s STEP: Saw pod success Feb 24 00:00:09.653: INFO: Pod "pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e" satisfied condition "success or failure" Feb 24 00:00:09.658: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e container configmap-volume-test: STEP: delete the pod Feb 24 00:00:09.804: INFO: Waiting for pod pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e to disappear Feb 24 00:00:09.814: INFO: Pod pod-configmaps-d20d93d1-ec44-433b-b684-b970f68ddc3e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:00:09.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6727" for this suite. 
• [SLOW TEST:8.427 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":66,"skipped":1305,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:00:09.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 24 00:00:11.116: INFO: Pod name wrapped-volume-race-d0ed930e-ec91-4b79-b3ce-c61c51d4a9ac: Found 0 pods out of 5 Feb 24 00:00:16.148: INFO: Pod name wrapped-volume-race-d0ed930e-ec91-4b79-b3ce-c61c51d4a9ac: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d0ed930e-ec91-4b79-b3ce-c61c51d4a9ac in namespace emptydir-wrapper-5742, will wait for the garbage collector to delete the pods Feb 24 00:00:44.291: INFO: Deleting ReplicationController 
wrapped-volume-race-d0ed930e-ec91-4b79-b3ce-c61c51d4a9ac took: 34.158776ms Feb 24 00:00:44.793: INFO: Terminating ReplicationController wrapped-volume-race-d0ed930e-ec91-4b79-b3ce-c61c51d4a9ac pods took: 501.83643ms STEP: Creating RC which spawns configmap-volume pods Feb 24 00:01:03.431: INFO: Pod name wrapped-volume-race-ba763e48-df79-4fdd-a6f9-6590577f5a82: Found 0 pods out of 5 Feb 24 00:01:08.440: INFO: Pod name wrapped-volume-race-ba763e48-df79-4fdd-a6f9-6590577f5a82: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ba763e48-df79-4fdd-a6f9-6590577f5a82 in namespace emptydir-wrapper-5742, will wait for the garbage collector to delete the pods Feb 24 00:01:42.581: INFO: Deleting ReplicationController wrapped-volume-race-ba763e48-df79-4fdd-a6f9-6590577f5a82 took: 40.566751ms Feb 24 00:01:42.982: INFO: Terminating ReplicationController wrapped-volume-race-ba763e48-df79-4fdd-a6f9-6590577f5a82 pods took: 400.725145ms STEP: Creating RC which spawns configmap-volume pods Feb 24 00:01:56.275: INFO: Pod name wrapped-volume-race-1399c216-8057-41b8-be53-58316b688630: Found 0 pods out of 5 Feb 24 00:02:01.294: INFO: Pod name wrapped-volume-race-1399c216-8057-41b8-be53-58316b688630: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1399c216-8057-41b8-be53-58316b688630 in namespace emptydir-wrapper-5742, will wait for the garbage collector to delete the pods Feb 24 00:02:35.514: INFO: Deleting ReplicationController wrapped-volume-race-1399c216-8057-41b8-be53-58316b688630 took: 123.476241ms Feb 24 00:02:35.914: INFO: Terminating ReplicationController wrapped-volume-race-1399c216-8057-41b8-be53-58316b688630 pods took: 400.512856ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:02:55.710: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5742" for this suite. • [SLOW TEST:165.904 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":67,"skipped":1320,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:02:55.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 24 00:02:56.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 24 00:02:58.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:03:00.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:03:02.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099376, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 24 00:03:05.871: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 24 00:03:13.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4270 to-be-attached-pod -i -c=container1' Feb 24 00:03:17.202: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:03:18.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4270" for this suite. STEP: Destroying namespace "webhook-4270-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:24.896 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":68,"skipped":1320,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:03:20.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 24 00:03:20.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760" in namespace "projected-3494" to be "success or failure" Feb 24 00:03:20.715: INFO: Pod 
"downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760": Phase="Pending", Reason="", readiness=false. Elapsed: 3.363088ms Feb 24 00:03:22.726: INFO: Pod "downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014289091s Feb 24 00:03:24.736: INFO: Pod "downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024273167s Feb 24 00:03:26.743: INFO: Pod "downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031579722s Feb 24 00:03:28.755: INFO: Pod "downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042800523s Feb 24 00:03:30.762: INFO: Pod "downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050682929s STEP: Saw pod success Feb 24 00:03:30.763: INFO: Pod "downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760" satisfied condition "success or failure" Feb 24 00:03:30.767: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760 container client-container: STEP: delete the pod Feb 24 00:03:31.022: INFO: Waiting for pod downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760 to disappear Feb 24 00:03:31.036: INFO: Pod downwardapi-volume-95f635d3-f108-466d-baea-1c284ddd5760 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:03:31.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3494" for this suite. 
• [SLOW TEST:10.424 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":69,"skipped":1320,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:03:31.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-0d26d833-2714-404b-b018-45cebf66ccfb
STEP: Creating a pod to test consume secrets
Feb 24 00:03:31.288: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba" in namespace "projected-258" to be "success or failure"
Feb 24 00:03:31.335: INFO: Pod "pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba": Phase="Pending", Reason="", readiness=false. Elapsed: 46.427849ms
Feb 24 00:03:33.343: INFO: Pod "pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055005726s
Feb 24 00:03:35.353: INFO: Pod "pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065145814s
Feb 24 00:03:37.365: INFO: Pod "pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076571659s
Feb 24 00:03:39.378: INFO: Pod "pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089407157s
STEP: Saw pod success
Feb 24 00:03:39.378: INFO: Pod "pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba" satisfied condition "success or failure"
Feb 24 00:03:39.382: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba container projected-secret-volume-test:
STEP: delete the pod
Feb 24 00:03:39.496: INFO: Waiting for pod pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba to disappear
Feb 24 00:03:39.519: INFO: Pod pod-projected-secrets-0ff7101e-a246-463b-88e3-5da708c656ba no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:03:39.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-258" for this suite.
• [SLOW TEST:8.475 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":70,"skipped":1324,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:03:39.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-e8d6234f-0656-46d9-8b44-caf17637d097 in namespace container-probe-2245
Feb 24 00:03:47.865: INFO: Started pod busybox-e8d6234f-0656-46d9-8b44-caf17637d097 in namespace container-probe-2245
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 00:03:47.870: INFO: Initial restart count of pod busybox-e8d6234f-0656-46d9-8b44-caf17637d097 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:07:49.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2245" for this suite.
• [SLOW TEST:249.771 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":71,"skipped":1367,"failed":0}
SS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:07:49.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Feb 24 00:07:57.448: INFO: Pod pod-hostip-3b514fb8-aee8-4ea9-8035-ac9cf23b3069 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:07:57.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9302" for this suite.
• [SLOW TEST:8.161 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":72,"skipped":1369,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:07:57.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 24 00:07:57.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9415'
Feb 24 00:07:57.805: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 00:07:57.805: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Feb 24 00:07:57.841: INFO: scanned /root for discovery docs:
Feb 24 00:07:57.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9415'
Feb 24 00:08:21.610: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 24 00:08:21.611: INFO: stdout: "Created e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770\nScaling up e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Feb 24 00:08:21.611: INFO: stdout: "Created e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770\nScaling up e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb 24 00:08:21.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9415'
Feb 24 00:08:21.769: INFO: stderr: ""
Feb 24 00:08:21.769: INFO: stdout: "e2e-test-httpd-rc-f9wh2 e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770-xrppd "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Feb 24 00:08:26.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9415'
Feb 24 00:08:26.918: INFO: stderr: ""
Feb 24 00:08:26.918: INFO: stdout: "e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770-xrppd "
Feb 24 00:08:26.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770-xrppd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9415'
Feb 24 00:08:27.070: INFO: stderr: ""
Feb 24 00:08:27.070: INFO: stdout: "true"
Feb 24 00:08:27.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770-xrppd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9415'
Feb 24 00:08:27.160: INFO: stderr: ""
Feb 24 00:08:27.160: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 24 00:08:27.160: INFO: e2e-test-httpd-rc-fe5553998e9abcd2e12e8cab7dec3770-xrppd is verified up and running
[AfterEach] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
Feb 24 00:08:27.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9415'
Feb 24 00:08:27.313: INFO: stderr: ""
Feb 24 00:08:27.313: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:08:27.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9415" for this suite.
• [SLOW TEST:29.867 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":280,"completed":73,"skipped":1372,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:08:27.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-c9d15116-76aa-42e0-9faf-f96484caa32b
STEP: Creating a pod to test consume secrets
Feb 24 00:08:27.473: INFO: Waiting up to 5m0s for pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9" in namespace "secrets-5007" to be "success or failure"
Feb 24 00:08:27.497: INFO: Pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.924192ms
Feb 24 00:08:29.508: INFO: Pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034889155s
Feb 24 00:08:31.517: INFO: Pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043776156s
Feb 24 00:08:33.526: INFO: Pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052407667s
Feb 24 00:08:35.532: INFO: Pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059126449s
Feb 24 00:08:37.540: INFO: Pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066783336s
STEP: Saw pod success
Feb 24 00:08:37.541: INFO: Pod "pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9" satisfied condition "success or failure"
Feb 24 00:08:37.545: INFO: Trying to get logs from node jerma-node pod pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9 container secret-volume-test:
STEP: delete the pod
Feb 24 00:08:37.673: INFO: Waiting for pod pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9 to disappear
Feb 24 00:08:37.683: INFO: Pod pod-secrets-61e2dd17-ddb2-4dfe-9e4d-56c6e08311a9 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:08:37.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5007" for this suite.
• [SLOW TEST:10.364 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":74,"skipped":1380,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:08:37.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 24 00:08:38.416: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 24 00:08:40.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:08:42.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:08:44.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:08:46.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718099718, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 00:08:49.629: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 00:08:49.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:08:51.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3179" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:13.482 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":75,"skipped":1382,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:08:51.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384
STEP: creating the pod
Feb 24 00:08:51.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1260'
Feb 24 00:08:51.903: INFO: stderr: ""
Feb 24 00:08:51.903: INFO: stdout: "pod/pause created\n"
Feb 24 00:08:51.903: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 24 00:08:51.904: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1260" to be "running and ready"
Feb 24 00:08:51.913: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.961586ms
Feb 24 00:08:53.926: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022225534s
Feb 24 00:08:55.942: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038194166s
Feb 24 00:08:57.953: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049677658s
Feb 24 00:08:59.961: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.057229198s
Feb 24 00:08:59.961: INFO: Pod "pause" satisfied condition "running and ready"
Feb 24 00:08:59.961: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 24 00:08:59.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1260'
Feb 24 00:09:00.153: INFO: stderr: ""
Feb 24 00:09:00.153: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 24 00:09:00.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1260'
Feb 24 00:09:00.264: INFO: stderr: ""
Feb 24 00:09:00.264: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 24 00:09:00.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1260'
Feb 24 00:09:00.393: INFO: stderr: ""
Feb 24 00:09:00.393: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 24 00:09:00.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1260'
Feb 24 00:09:00.515: INFO: stderr: ""
Feb 24 00:09:00.515: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Feb 24 00:09:00.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1260'
Feb 24 00:09:00.671: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:09:00.672: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 24 00:09:00.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1260'
Feb 24 00:09:00.796: INFO: stderr: "No resources found in kubectl-1260 namespace.\n"
Feb 24 00:09:00.797: INFO: stdout: ""
Feb 24 00:09:00.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1260 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 24 00:09:00.886: INFO: stderr: ""
Feb 24 00:09:00.887: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:09:00.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1260" for this suite.
• [SLOW TEST:9.714 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":280,"completed":76,"skipped":1434,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:09:00.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 24 00:09:01.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 24 00:09:13.662: INFO: >>> kubeConfig: /root/.kube/config
Feb 24 00:09:15.694: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:09:28.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7012" for this suite.
• [SLOW TEST:27.601 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":77,"skipped":1442,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:09:28.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 24 00:09:28.583: INFO: Waiting up to 5m0s for pod "pod-db507975-b228-4c02-a986-e3123c6623f2" in namespace "emptydir-9659" to be "success or failure"
Feb 24 00:09:28.595: INFO: Pod "pod-db507975-b228-4c02-a986-e3123c6623f2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.922748ms
Feb 24 00:09:30.606: INFO: Pod "pod-db507975-b228-4c02-a986-e3123c6623f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022335211s
Feb 24 00:09:32.615: INFO: Pod "pod-db507975-b228-4c02-a986-e3123c6623f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031605957s
Feb 24 00:09:34.623: INFO: Pod "pod-db507975-b228-4c02-a986-e3123c6623f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039741223s
Feb 24 00:09:36.636: INFO: Pod "pod-db507975-b228-4c02-a986-e3123c6623f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052537075s
STEP: Saw pod success
Feb 24 00:09:36.637: INFO: Pod "pod-db507975-b228-4c02-a986-e3123c6623f2" satisfied condition "success or failure"
Feb 24 00:09:36.643: INFO: Trying to get logs from node jerma-node pod pod-db507975-b228-4c02-a986-e3123c6623f2 container test-container:
STEP: delete the pod
Feb 24 00:09:36.776: INFO: Waiting for pod pod-db507975-b228-4c02-a986-e3123c6623f2 to disappear
Feb 24 00:09:36.807: INFO: Pod pod-db507975-b228-4c02-a986-e3123c6623f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:09:36.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9659" for this suite.
• [SLOW TEST:8.320 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":78,"skipped":1458,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:09:36.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-a867aa9e-4110-4756-82db-1a88df478740
STEP: Creating secret with name s-test-opt-upd-15d3e98f-2e80-424e-9a1b-80f12f2637a2
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a867aa9e-4110-4756-82db-1a88df478740
STEP: Updating secret s-test-opt-upd-15d3e98f-2e80-424e-9a1b-80f12f2637a2
STEP: Creating secret with name s-test-opt-create-defbd3d2-6f04-4da9-9088-f35d0c175ab5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:11:16.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2483" for this suite.
• [SLOW TEST:99.420 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":79,"skipped":1477,"failed":0}
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:11:16.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0224 00:11:27.536611 10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 00:11:27.536: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:11:27.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4546" for this suite.
• [SLOW TEST:11.584 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":80,"skipped":1477,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:11:27.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 24 00:11:43.844: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:11:43.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2405" for this suite.
• [SLOW TEST:16.102 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":81,"skipped":1484,"failed":0}
SS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:11:43.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-48e84d97-0b57-47e7-9b92-d0e7d870ebcd
STEP: Creating a pod to test consume secrets
Feb 24 00:11:44.143: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc" in namespace "projected-1978" to be "success or failure"
Feb 24 00:11:44.160: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.427098ms
Feb 24 00:11:46.168: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024949814s
Feb 24 00:11:48.180: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037285029s
Feb 24 00:11:50.192: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048853378s
Feb 24 00:11:52.199: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056623174s
Feb 24 00:11:54.206: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062749492s
Feb 24 00:11:56.225: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.081812554s
STEP: Saw pod success
Feb 24 00:11:56.225: INFO: Pod "pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc" satisfied condition "success or failure"
Feb 24 00:11:56.229: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc container secret-volume-test:
STEP: delete the pod
Feb 24 00:11:56.503: INFO: Waiting for pod pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc to disappear
Feb 24 00:11:56.521: INFO: Pod pod-projected-secrets-8136c149-b8d4-450e-8702-75fe09a22efc no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:11:56.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1978" for this suite.
• [SLOW TEST:12.613 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":82,"skipped":1486,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:11:56.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-1d9afdf7-1704-4d92-a37c-c2125624ae77
STEP: Creating a pod to test consume secrets
Feb 24 00:11:56.871: INFO: Waiting up to 5m0s for pod "pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52" in namespace "secrets-625" to be "success or failure"
Feb 24 00:11:56.914: INFO: Pod "pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52": Phase="Pending", Reason="", readiness=false. Elapsed: 42.058154ms
Feb 24 00:11:58.923: INFO: Pod "pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051107933s
Feb 24 00:12:00.937: INFO: Pod "pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065405492s
Feb 24 00:12:02.942: INFO: Pod "pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070402291s
Feb 24 00:12:04.948: INFO: Pod "pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076751256s
STEP: Saw pod success
Feb 24 00:12:04.948: INFO: Pod "pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52" satisfied condition "success or failure"
Feb 24 00:12:04.956: INFO: Trying to get logs from node jerma-node pod pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52 container secret-volume-test:
STEP: delete the pod
Feb 24 00:12:05.065: INFO: Waiting for pod pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52 to disappear
Feb 24 00:12:05.078: INFO: Pod pod-secrets-47214ff9-5ead-40bb-a087-e894478f2c52 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:12:05.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-625" for this suite.
• [SLOW TEST:8.595 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":83,"skipped":1491,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:12:05.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-vbps
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 00:12:05.285: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vbps" in namespace "subpath-1686" to be "success or failure"
Feb 24 00:12:05.338: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Pending", Reason="", readiness=false. Elapsed: 53.548428ms
Feb 24 00:12:07.345: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060147644s
Feb 24 00:12:09.351: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066148763s
Feb 24 00:12:12.057: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Pending", Reason="", readiness=false. Elapsed: 6.771779108s
Feb 24 00:12:14.072: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Pending", Reason="", readiness=false. Elapsed: 8.786962522s
Feb 24 00:12:16.078: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 10.793759371s
Feb 24 00:12:18.084: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 12.799070587s
Feb 24 00:12:20.114: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 14.829631732s
Feb 24 00:12:22.125: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 16.840162053s
Feb 24 00:12:24.132: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 18.847287415s
Feb 24 00:12:26.136: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 20.851737777s
Feb 24 00:12:28.144: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 22.859533949s
Feb 24 00:12:30.152: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 24.867145736s
Feb 24 00:12:32.160: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 26.87497761s
Feb 24 00:12:34.171: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Running", Reason="", readiness=true. Elapsed: 28.886470619s
Feb 24 00:12:36.946: INFO: Pod "pod-subpath-test-projected-vbps": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.661039219s
STEP: Saw pod success
Feb 24 00:12:36.946: INFO: Pod "pod-subpath-test-projected-vbps" satisfied condition "success or failure"
Feb 24 00:12:36.958: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-vbps container test-container-subpath-projected-vbps:
STEP: delete the pod
Feb 24 00:12:37.021: INFO: Waiting for pod pod-subpath-test-projected-vbps to disappear
Feb 24 00:12:37.029: INFO: Pod pod-subpath-test-projected-vbps no longer exists
STEP: Deleting pod pod-subpath-test-projected-vbps
Feb 24 00:12:37.029: INFO: Deleting pod "pod-subpath-test-projected-vbps" in namespace "subpath-1686"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:12:37.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1686" for this suite.
• [SLOW TEST:31.978 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":84,"skipped":1492,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:12:37.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-fafa11bf-b62d-44d8-80b6-f95fb92dc798
STEP: Creating a pod to test consume secrets
Feb 24 00:12:37.288: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5" in namespace "projected-151" to be "success or failure"
Feb 24 00:12:37.330: INFO: Pod "pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 41.758108ms
Feb 24 00:12:39.340: INFO: Pod "pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051374504s
Feb 24 00:12:41.348: INFO: Pod "pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059086269s
Feb 24 00:12:43.372: INFO: Pod "pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083676481s
Feb 24 00:12:45.421: INFO: Pod "pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132276264s
STEP: Saw pod success
Feb 24 00:12:45.421: INFO: Pod "pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5" satisfied condition "success or failure"
Feb 24 00:12:45.424: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5 container projected-secret-volume-test:
STEP: delete the pod
Feb 24 00:12:45.470: INFO: Waiting for pod pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5 to disappear
Feb 24 00:12:45.487: INFO: Pod pod-projected-secrets-61c5779a-74d3-47bc-8992-e9cd09a50dd5 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:12:45.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-151" for this suite.
• [SLOW TEST:8.381 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":85,"skipped":1495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:12:45.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Feb 24 00:12:46.281: INFO: created pod pod-service-account-defaultsa
Feb 24 00:12:46.281: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 24 00:12:46.417: INFO: created pod pod-service-account-mountsa
Feb 24 00:12:46.417: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 24 00:12:46.448: INFO: created pod pod-service-account-nomountsa
Feb 24 00:12:46.448: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 24 00:12:46.634: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 24 00:12:46.635: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 24 00:12:46.664: INFO: created pod pod-service-account-mountsa-mountspec
Feb 24 00:12:46.664: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 24 00:12:46.711: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 24 00:12:46.711: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 24 00:12:48.729: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 24 00:12:48.729: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 24 00:12:48.779: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 24 00:12:48.779: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 24 00:12:50.513: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 24 00:12:50.513: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:12:50.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1703" for this suite.
• [SLOW TEST:6.439 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":280,"completed":86,"skipped":1522,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:12:51.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 24 00:13:20.849: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:13:20.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6590" for this suite.
• [SLOW TEST:29.029 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":87,"skipped":1522,"failed":0}
SS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:13:20.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-b8f08414-cfe2-4572-a98a-c4c5c3392f99
STEP: Creating a pod to test consume secrets
Feb 24 00:13:21.246: INFO: Waiting up to 5m0s for pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980" in namespace "secrets-1029" to be "success or failure"
Feb 24 00:13:21.264: INFO: Pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980": Phase="Pending", Reason="", readiness=false. Elapsed: 17.737127ms
Feb 24 00:13:23.271: INFO: Pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025509019s
Feb 24 00:13:25.277: INFO: Pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031611569s
Feb 24 00:13:27.285: INFO: Pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039095786s
Feb 24 00:13:29.291: INFO: Pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044964425s
Feb 24 00:13:31.301: INFO: Pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055385353s
STEP: Saw pod success
Feb 24 00:13:31.301: INFO: Pod "pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980" satisfied condition "success or failure"
Feb 24 00:13:31.306: INFO: Trying to get logs from node jerma-node pod pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980 container secret-volume-test:
STEP: delete the pod
Feb 24 00:13:31.383: INFO: Waiting for pod pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980 to disappear
Feb 24 00:13:31.390: INFO: Pod pod-secrets-e751f6f3-f871-4da3-a132-2124d3da0980 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:13:31.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1029" for this suite.
• [SLOW TEST:10.431 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":88,"skipped":1524,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:13:31.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 00:13:32.699: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 00:13:34.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:13:36.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:13:38.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:13:40.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."},
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100012, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 24 00:13:43.797: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:13:43.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-722" for this suite. STEP: Destroying namespace "webhook-722-markers" for this suite. 
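The repeated "deployment status" lines above are the e2e framework polling the webhook Deployment until it reports available replicas, then giving up after a timeout. A minimal shell equivalent of that poll-until-ready-or-timeout pattern is sketched below; `wait_for` and its arguments are illustrative helpers, not part of the Kubernetes test framework:

```shell
# wait_for TIMEOUT_SECONDS INTERVAL_SECONDS CMD [ARGS...]
# Re-runs CMD every INTERVAL seconds until it succeeds;
# returns non-zero once TIMEOUT has elapsed without success.
wait_for() {
  timeout=$1; interval=$2; shift 2
  deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep "$interval"
  done
}
```

Against a real cluster this could wrap a readiness check such as `wait_for 300 2 kubectl rollout status deployment/sample-webhook-deployment --timeout=1s` (command shown for illustration only); the framework itself does the equivalent in Go against the Deployment's `AvailableReplicas`.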
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.841 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":89,"skipped":1536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:13:44.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 24 00:13:46.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Feb 24 00:13:48.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:13:50.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:13:52.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:13:54.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:13:56.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100026, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 24 00:13:59.887: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:13:59.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5661" for this suite. STEP: Destroying namespace "webhook-5661-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.983 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":90,"skipped":1563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:14:00.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1668.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1668.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1668.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1668.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1668.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 20.161.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.161.20_udp@PTR;check="$$(dig +tcp +noall +answer +search 20.161.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.161.20_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1668.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1668.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1668.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1668.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1668.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1668.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 20.161.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.161.20_udp@PTR;check="$$(dig +tcp +noall +answer +search 20.161.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.161.20_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 24 00:14:14.472: INFO: Unable to read wheezy_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.482: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.487: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.490: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.523: INFO: Unable to read jessie_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.527: INFO: Unable to read jessie_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.531: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod 
dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.534: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:14.552: INFO: Lookups using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba failed for: [wheezy_udp@dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_udp@dns-test-service.dns-1668.svc.cluster.local jessie_tcp@dns-test-service.dns-1668.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local] Feb 24 00:14:19.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.572: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.577: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod 
dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.658: INFO: Unable to read jessie_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.663: INFO: Unable to read jessie_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.673: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.679: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:19.710: INFO: Lookups using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba failed for: [wheezy_udp@dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_udp@dns-test-service.dns-1668.svc.cluster.local jessie_tcp@dns-test-service.dns-1668.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local] Feb 24 00:14:24.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-1668.svc.cluster.local from pod 
dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.585: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.590: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.597: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.651: INFO: Unable to read jessie_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.655: INFO: Unable to read jessie_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.661: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not 
find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:24.692: INFO: Lookups using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba failed for: [wheezy_udp@dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_udp@dns-test-service.dns-1668.svc.cluster.local jessie_tcp@dns-test-service.dns-1668.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local] Feb 24 00:14:29.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.570: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.574: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.578: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.610: INFO: Unable to read jessie_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods 
dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.615: INFO: Unable to read jessie_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.619: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.623: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:29.642: INFO: Lookups using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba failed for: [wheezy_udp@dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_udp@dns-test-service.dns-1668.svc.cluster.local jessie_tcp@dns-test-service.dns-1668.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local] Feb 24 00:14:34.584: INFO: Unable to read wheezy_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.601: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods 
dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.620: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.627: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.663: INFO: Unable to read jessie_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.667: INFO: Unable to read jessie_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.670: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.673: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:34.695: INFO: Lookups using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba failed for: [wheezy_udp@dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_udp@dns-test-service.dns-1668.svc.cluster.local jessie_tcp@dns-test-service.dns-1668.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local] Feb 24 00:14:39.564: INFO: Unable to read wheezy_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.570: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.578: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.585: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.623: INFO: Unable to read jessie_udp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.628: INFO: Unable to read jessie_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.632: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.637: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:39.661: INFO: Lookups using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba failed for: [wheezy_udp@dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_udp@dns-test-service.dns-1668.svc.cluster.local jessie_tcp@dns-test-service.dns-1668.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1668.svc.cluster.local] Feb 24 00:14:44.997: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local from pod dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba: the server could not find the requested resource (get pods dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba) Feb 24 00:14:45.295: INFO: Lookups using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba failed for: [wheezy_tcp@dns-test-service.dns-1668.svc.cluster.local] Feb 24 00:14:49.687: INFO: DNS probes using dns-1668/dns-test-cdfc6feb-5ddb-4821-a0ca-d9c2cfe899ba succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:14:50.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1668" for 
this suite. • [SLOW TEST:49.880 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":280,"completed":91,"skipped":1594,"failed":0} [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:14:50.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:15:50.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9172" for this suite. 
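Annotation: the eight lookups that fail and then succeed in the DNS test above all follow one naming pattern — plain service and `_http._tcp` SRV-style names, over UDP and TCP, from two resolver images ("wheezy" and "jessie"). A minimal sketch of how those probe names are built (the service and namespace names come from the log; the helper itself is illustrative, not e2e framework code):

```go
package main

import "fmt"

// probeNames builds the DNS names the e2e probe queries for the service under
// test: plain service lookups and _http._tcp SRV-style lookups, over UDP and
// TCP, from the two resolver images. Illustrative sketch only.
func probeNames(service, namespace string) []string {
	base := fmt.Sprintf("%s.%s.svc.cluster.local", service, namespace)
	var names []string
	for _, image := range []string{"wheezy", "jessie"} {
		names = append(names,
			image+"_udp@"+base,
			image+"_tcp@"+base,
			image+"_udp@_http._tcp."+base,
			image+"_tcp@_http._tcp."+base,
		)
	}
	return names
}

func main() {
	for _, n := range probeNames("dns-test-service", "dns-1668") {
		fmt.Println(n)
	}
}
```

The early "Unable to read" failures are expected while cluster DNS converges; the probe retries until all eight names resolve, which is why the run above ends in "DNS probes ... succeeded".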
• [SLOW TEST:60.173 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":92,"skipped":1594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:15:50.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:16:02.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1741" for 
this suite. • [SLOW TEST:12.287 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":93,"skipped":1656,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:16:02.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 24 00:16:02.740: INFO: Waiting up to 5m0s for pod "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7" in namespace "downward-api-2830" to be "success or failure" Feb 24 00:16:02.763: INFO: Pod "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.979188ms Feb 24 00:16:04.777: INFO: Pod "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03757079s Feb 24 00:16:06.914: INFO: Pod "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173916156s Feb 24 00:16:08.924: INFO: Pod "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18392699s Feb 24 00:16:10.933: INFO: Pod "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.193265109s STEP: Saw pod success Feb 24 00:16:10.933: INFO: Pod "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7" satisfied condition "success or failure" Feb 24 00:16:10.939: INFO: Trying to get logs from node jerma-node pod downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7 container dapi-container: STEP: delete the pod Feb 24 00:16:11.149: INFO: Waiting for pod downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7 to disappear Feb 24 00:16:11.166: INFO: Pod downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:16:11.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2830" for this suite. 
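Annotation: the Downward API test above verifies that pod metadata is injected as env vars via `fieldRef` paths. A sketch of the fieldPath-to-value mapping the kubelet performs (the pod name and namespace are from the log; the pod IP is a hypothetical stand-in, and this is not kubelet code):

```go
package main

import (
	"errors"
	"fmt"
)

// podMeta mirrors the fields the Downward API test exposes as env vars.
type podMeta struct {
	Name, Namespace, PodIP string
}

// resolveFieldRef maps a downward-API fieldPath to its value, the way env vars
// such as POD_NAME are populated from metadata.name. Illustrative sketch only.
func resolveFieldRef(m podMeta, fieldPath string) (string, error) {
	switch fieldPath {
	case "metadata.name":
		return m.Name, nil
	case "metadata.namespace":
		return m.Namespace, nil
	case "status.podIP":
		return m.PodIP, nil
	}
	return "", errors.New("unsupported fieldPath: " + fieldPath)
}

func main() {
	// Pod IP below is a hypothetical value; name/namespace come from the log.
	m := podMeta{Name: "downward-api-4a44fd6f-1ea0-4c9f-bea0-3c1e1a4394c7", Namespace: "downward-api-2830", PodIP: "10.44.0.1"}
	for _, fp := range []string{"metadata.name", "metadata.namespace", "status.podIP"} {
		v, _ := resolveFieldRef(m, fp)
		fmt.Println(fp, "=", v)
	}
}
```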
• [SLOW TEST:8.604 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":94,"skipped":1657,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:16:11.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 24 00:16:11.354: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:16:26.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pods-3898" for this suite. • [SLOW TEST:15.223 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":95,"skipped":1678,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:16:26.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 24 00:16:26.565: INFO: Create a RollingUpdate DaemonSet Feb 24 00:16:26.569: INFO: Check that daemon pods launch on every node of the cluster Feb 24 00:16:26.638: INFO: Number of nodes with available pods: 0 Feb 24 00:16:26.638: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:27.663: INFO: Number of nodes with available pods: 0 Feb 24 00:16:27.663: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:28.941: INFO: Number of nodes with available pods: 0 Feb 24 00:16:28.941: INFO: Node jerma-node is running more than one daemon 
pod Feb 24 00:16:29.655: INFO: Number of nodes with available pods: 0 Feb 24 00:16:29.655: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:30.660: INFO: Number of nodes with available pods: 0 Feb 24 00:16:30.661: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:31.656: INFO: Number of nodes with available pods: 0 Feb 24 00:16:31.656: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:34.175: INFO: Number of nodes with available pods: 0 Feb 24 00:16:34.175: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:35.096: INFO: Number of nodes with available pods: 0 Feb 24 00:16:35.096: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:36.020: INFO: Number of nodes with available pods: 0 Feb 24 00:16:36.020: INFO: Node jerma-node is running more than one daemon pod Feb 24 00:16:36.667: INFO: Number of nodes with available pods: 1 Feb 24 00:16:36.668: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:16:37.655: INFO: Number of nodes with available pods: 1 Feb 24 00:16:37.655: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:16:38.659: INFO: Number of nodes with available pods: 2 Feb 24 00:16:38.660: INFO: Number of running nodes: 2, number of available pods: 2 Feb 24 00:16:38.660: INFO: Update the DaemonSet to trigger a rollout Feb 24 00:16:38.724: INFO: Updating DaemonSet daemon-set Feb 24 00:16:52.788: INFO: Roll back the DaemonSet before rollout is complete Feb 24 00:16:52.794: INFO: Updating DaemonSet daemon-set Feb 24 00:16:52.794: INFO: Make sure DaemonSet rollback is complete Feb 24 00:16:52.807: INFO: Wrong image for pod: daemon-set-65rsf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 24 00:16:52.807: INFO: Pod daemon-set-65rsf is not available Feb 24 00:16:53.822: INFO: Wrong image for pod: daemon-set-65rsf. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 24 00:16:53.822: INFO: Pod daemon-set-65rsf is not available Feb 24 00:16:54.827: INFO: Wrong image for pod: daemon-set-65rsf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 24 00:16:54.827: INFO: Pod daemon-set-65rsf is not available Feb 24 00:16:55.828: INFO: Pod daemon-set-h8gj5 is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1859, will wait for the garbage collector to delete the pods Feb 24 00:16:55.933: INFO: Deleting DaemonSet.extensions daemon-set took: 6.796506ms Feb 24 00:16:56.234: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.515373ms Feb 24 00:17:06.059: INFO: Number of nodes with available pods: 0 Feb 24 00:17:06.060: INFO: Number of running nodes: 0, number of available pods: 0 Feb 24 00:17:06.070: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1859/daemonsets","resourceVersion":"10325878"},"items":null} Feb 24 00:17:06.073: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1859/pods","resourceVersion":"10325878"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:17:06.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1859" for this suite. 
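Annotation: the DaemonSet test above rolls out a bad image (`foo:non-existent`), rolls back mid-rollout, and then polls until no pod runs the wrong image — the repeated "Wrong image for pod ... is not available" lines are that poll. A sketch of the completeness check (pod names and images come from the log; the helper is illustrative, not the e2e implementation):

```go
package main

import "fmt"

// daemonPod is a minimal stand-in for the fields the rollback check inspects.
type daemonPod struct {
	Name      string
	Image     string
	Available bool
}

// rollbackComplete reports whether a DaemonSet rollback has finished: every
// daemon pod must run the rolled-back template image and be available.
// It returns the names of pods still pending. Illustrative sketch only.
func rollbackComplete(pods []daemonPod, wantImage string) (bool, []string) {
	var pending []string
	for _, p := range pods {
		if p.Image != wantImage || !p.Available {
			pending = append(pending, p.Name)
		}
	}
	return len(pending) == 0, pending
}

func main() {
	want := "docker.io/library/httpd:2.4.38-alpine"
	// Mid-rollback state, as in the log: one pod still on the bad image.
	mid := []daemonPod{{Name: "daemon-set-65rsf", Image: "foo:non-existent", Available: false}}
	done, pending := rollbackComplete(mid, want)
	fmt.Println(done, pending)
}
```

Note the "rollback without unnecessary restarts" property: the poll succeeds as soon as a replacement pod (`daemon-set-h8gj5` above) is available, without restarting pods that never ran the bad image.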
• [SLOW TEST:39.693 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":96,"skipped":1685,"failed":0} [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:17:06.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 24 00:17:06.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef" in namespace "downward-api-8364" to be "success or failure" Feb 24 00:17:06.209: INFO: Pod "downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53355ms Feb 24 00:17:08.217: INFO: Pod "downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012517066s Feb 24 00:17:10.225: INFO: Pod "downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020591034s Feb 24 00:17:12.264: INFO: Pod "downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059634861s Feb 24 00:17:14.272: INFO: Pod "downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066952426s STEP: Saw pod success Feb 24 00:17:14.272: INFO: Pod "downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef" satisfied condition "success or failure" Feb 24 00:17:14.277: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef container client-container: STEP: delete the pod Feb 24 00:17:14.381: INFO: Waiting for pod downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef to disappear Feb 24 00:17:14.388: INFO: Pod downwardapi-volume-136cee1b-4292-4218-8d61-bb25a4e392ef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:17:14.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8364" for this suite. 
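Annotation: the `Waiting up to 5m0s for pod ... to be "success or failure"` lines that recur throughout this run are one polling pattern: re-read the pod phase until it reaches a terminal state (`Succeeded`/`Failed`) or the timeout expires. A sketch of that loop over an observed phase sequence (a real wait polls the API server; this helper is illustrative only):

```go
package main

import (
	"errors"
	"fmt"
)

// waitSucceeded walks a sequence of observed pod phases and returns how many
// polls were needed to reach a terminal phase, mimicking the framework's
// "success or failure" wait. Illustrative sketch, not framework code.
func waitSucceeded(phases []string) (int, error) {
	for i, p := range phases {
		switch p {
		case "Succeeded":
			return i + 1, nil
		case "Failed":
			return i + 1, errors.New("pod failed")
		}
	}
	return len(phases), errors.New("timed out waiting for terminal phase")
}

func main() {
	// Phase sequence matching the Downward API volume pod above:
	// four Pending polls, then Succeeded.
	polls, err := waitSucceeded([]string{"Pending", "Pending", "Pending", "Pending", "Succeeded"})
	fmt.Println(polls, err)
}
```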
• [SLOW TEST:8.302 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":97,"skipped":1685,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:17:14.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-536173ab-8229-45cf-a321-a524df81e3e4 STEP: Creating a pod to test consume configMaps Feb 24 00:17:14.556: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748" in namespace "configmap-5590" to be "success or failure" Feb 24 00:17:14.727: INFO: Pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748": Phase="Pending", Reason="", readiness=false. Elapsed: 170.72631ms Feb 24 00:17:16.735: INFO: Pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.178677843s Feb 24 00:17:18.742: INFO: Pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185516989s Feb 24 00:17:20.787: INFO: Pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748": Phase="Pending", Reason="", readiness=false. Elapsed: 6.231012822s Feb 24 00:17:22.794: INFO: Pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237945643s Feb 24 00:17:24.800: INFO: Pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.243864888s STEP: Saw pod success Feb 24 00:17:24.800: INFO: Pod "pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748" satisfied condition "success or failure" Feb 24 00:17:24.803: INFO: Trying to get logs from node jerma-node pod pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748 container configmap-volume-test: STEP: delete the pod Feb 24 00:17:26.540: INFO: Waiting for pod pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748 to disappear Feb 24 00:17:26.645: INFO: Pod pod-configmaps-6ddb01a4-c9a0-4814-9bc6-3f416a3ea748 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:17:26.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5590" for this suite. 
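Annotation: "consumable from pods in volume with mappings" means the ConfigMap volume carries `items` entries that remap data keys to file paths inside the volume. A sketch of that projection (the data keys and mapping below are hypothetical stand-ins for the test's generated ConfigMap; this is not kubelet code):

```go
package main

import (
	"fmt"
	"sort"
)

// projectWithMappings applies a ConfigMap volume's items (key -> relative
// path) to the ConfigMap's data, returning the file contents that end up in
// the volume. When items are given, unmapped keys are omitted. Sketch only.
func projectWithMappings(data, items map[string]string) map[string]string {
	files := make(map[string]string)
	for key, path := range items {
		if v, ok := data[key]; ok {
			files[path] = v
		}
	}
	return files
}

func main() {
	// Hypothetical keys and mapping, for illustration.
	data := map[string]string{"data-1": "value-1", "data-2": "value-2"}
	items := map[string]string{"data-2": "path/to/data-2"}
	files := projectWithMappings(data, items)
	paths := make([]string, 0, len(files))
	for p := range files {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	for _, p := range paths {
		fmt.Println(p, "=>", files[p])
	}
}
```

The pod in the test then reads the mapped path and the framework compares the file content against the ConfigMap value, which is what "Saw pod success" confirms above.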
• [SLOW TEST:12.369 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":98,"skipped":1701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:17:26.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 24 00:17:27.075: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 24 00:17:32.102: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 24 00:17:34.557: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 24 00:17:36.570: INFO: Creating deployment "test-rollover-deployment" Feb 24 00:17:36.624: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 24 00:17:38.651: INFO: Check revision of new replica set for deployment "test-rollover-deployment" 
Feb 24 00:17:38.661: INFO: Ensure that both replica sets have 1 created replica Feb 24 00:17:38.667: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 24 00:17:38.677: INFO: Updating deployment test-rollover-deployment Feb 24 00:17:38.677: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 24 00:17:40.707: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 24 00:17:40.719: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 24 00:17:40.726: INFO: all replica sets need to contain the pod-template-hash label Feb 24 00:17:40.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100259, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:42.739: INFO: all replica sets need to contain the pod-template-hash label Feb 24 00:17:42.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100259, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:44.756: INFO: all replica sets need to contain the pod-template-hash label Feb 24 00:17:44.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100259, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:46.740: INFO: all replica sets need to contain the pod-template-hash label Feb 24 00:17:46.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100265, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:48.739: INFO: all replica sets need to contain the pod-template-hash label Feb 24 00:17:48.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100265, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:50.743: INFO: all replica sets need to contain the pod-template-hash label Feb 24 00:17:50.743: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100265, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:52.735: INFO: all replica sets need to contain the pod-template-hash label Feb 24 00:17:52.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100265, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:54.739: INFO: all 
replica sets need to contain the pod-template-hash label Feb 24 00:17:54.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100265, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100256, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:17:56.745: INFO: Feb 24 00:17:56.745: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 24 00:17:56.760: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9330 /apis/apps/v1/namespaces/deployment-9330/deployments/test-rollover-deployment 723e70e5-b52f-4ca7-a7a3-a2457d5a018e 10326133 2 2020-02-24 00:17:36 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034637e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-24 00:17:36 +0000 UTC,LastTransitionTime:2020-02-24 00:17:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-24 00:17:56 +0000 UTC,LastTransitionTime:2020-02-24 00:17:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 24 00:17:56.766: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9330 /apis/apps/v1/namespaces/deployment-9330/replicasets/test-rollover-deployment-574d6dfbff 4b23c3b5-3b21-423e-85d4-f87431cb595c 10326122 2 2020-02-24 00:17:38 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 
723e70e5-b52f-4ca7-a7a3-a2457d5a018e 0xc003463c67 0xc003463c68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003463cd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 24 00:17:56.766: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 24 00:17:56.767: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9330 /apis/apps/v1/namespaces/deployment-9330/replicasets/test-rollover-controller a99ab406-e2ee-4fc1-b226-b454308eb601 10326131 2 2020-02-24 00:17:27 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 723e70e5-b52f-4ca7-a7a3-a2457d5a018e 0xc003463b97 0xc003463b98}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003463bf8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 24 00:17:56.767: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9330 /apis/apps/v1/namespaces/deployment-9330/replicasets/test-rollover-deployment-f6c94f66c ce99b653-e1d4-49f7-93c3-585b99156958 10326076 2 2020-02-24 00:17:36 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 723e70e5-b52f-4ca7-a7a3-a2457d5a018e 0xc003463d40 0xc003463d41}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003463db8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 24 00:17:56.772: INFO: Pod "test-rollover-deployment-574d6dfbff-k2mtq" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-k2mtq test-rollover-deployment-574d6dfbff- deployment-9330 /api/v1/namespaces/deployment-9330/pods/test-rollover-deployment-574d6dfbff-k2mtq f845cf1e-d1bc-49e3-a8e1-16893e0b8c32 10326096 0 2020-02-24 00:17:38 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 4b23c3b5-3b21-423e-85d4-f87431cb595c 0xc0049082d7 0xc0049082d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dk89l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dk89l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dk89l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil
,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 00:17:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 00:17:45 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 00:17:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 00:17:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-24 00:17:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 00:17:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7a93c63b48c2b914d2273c5c5828fed70ec734198e7919b38eb0ef43e72b12bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:17:56.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9330" for this suite. 
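The rollover behavior exercised by this spec can be reproduced outside the e2e framework. Below is a minimal sketch of a Deployment comparable to `test-rollover-deployment`, with the strategy and timing values copied from the spec dumped in the log above (the container name and image also come from the log; everything else is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment   # name taken from the log
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10              # matches MinReadySeconds:10 in the dumped spec
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0            # matches MaxUnavailable:0
      maxSurge: 1                  # matches MaxSurge:1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```

Updating the pod template (for example with `kubectl set image`) triggers the rollover the test verifies: the new ReplicaSet scales up one surge pod at a time (`maxSurge: 1`) while no old pod is removed before its replacement is available (`maxUnavailable: 0`), which is why the log shows `Replicas:2, UpdatedReplicas:1` while the rollover is in flight and old ReplicaSets end at zero replicas afterwards.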
• [SLOW TEST:30.018 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":99,"skipped":1736,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:17:56.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 24 00:18:17.045: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 24 00:18:17.075: INFO: Pod pod-with-prestop-exec-hook still exists Feb 24 00:18:19.075: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 24 00:18:19.079: INFO: Pod pod-with-prestop-exec-hook still exists Feb 24 00:18:21.075: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 24 00:18:21.081: INFO: Pod pod-with-prestop-exec-hook still exists Feb 24 00:18:23.075: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 24 00:18:23.080: INFO: Pod pod-with-prestop-exec-hook still exists Feb 24 00:18:25.075: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 24 00:18:25.103: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:18:25.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4420" for this suite. 
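The preStop behavior this spec verifies corresponds to a container lifecycle hook. A minimal sketch follows; the pod name mirrors the log, but the hook command is illustrative (the actual e2e test calls back to its HTTPGet handler pod rather than writing a file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook   # name taken from the log
spec:
  containers:
  - name: main
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    lifecycle:
      preStop:
        exec:
          # illustrative command; the e2e test notifies a separate handler pod instead
          command: ["/bin/sh", "-c", "echo prestop > /tmp/prestop"]
```

The polling loop in the log ("Pod pod-with-prestop-exec-hook still exists" roughly every 2s after deletion) reflects that the kubelet runs the preStop hook before terminating the container, so the pod lingers until the hook completes and the termination grace period is honored.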
• [SLOW TEST:28.428 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":100,"skipped":1739,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:18:25.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on node default medium Feb 24 00:18:25.399: INFO: Waiting up to 5m0s for pod "pod-b636099e-e1e7-4450-a514-37f333fdb942" in namespace "emptydir-3753" to be "success or failure" Feb 24 00:18:25.415: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.733406ms Feb 24 00:18:27.425: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025598687s Feb 24 00:18:29.430: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029920585s Feb 24 00:18:31.436: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036149499s Feb 24 00:18:33.441: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041461247s Feb 24 00:18:35.449: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048921273s Feb 24 00:18:37.889: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.489348477s STEP: Saw pod success Feb 24 00:18:37.889: INFO: Pod "pod-b636099e-e1e7-4450-a514-37f333fdb942" satisfied condition "success or failure" Feb 24 00:18:37.898: INFO: Trying to get logs from node jerma-node pod pod-b636099e-e1e7-4450-a514-37f333fdb942 container test-container: STEP: delete the pod Feb 24 00:18:38.065: INFO: Waiting for pod pod-b636099e-e1e7-4450-a514-37f333fdb942 to disappear Feb 24 00:18:38.073: INFO: Pod pod-b636099e-e1e7-4450-a514-37f333fdb942 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:18:38.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3753" for this suite. 
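The emptyDir check above amounts to mounting a default-medium volume and reading the mode of the mount point. A minimal sketch, with the pod name, image, and mount path as illustrative assumptions (the e2e test uses its own mounttest image and container name `test-container`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default-mode    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    # prints the permission bits of the mounted directory, then exits
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium (node storage), as in the test
```

The conformance assertion behind "should have the correct mode" is that a default-medium emptyDir mounts with mode `0777` (`rwxrwxrwx`); the pod above surfaces that mode in its logs so it can be checked with `kubectl logs`.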
• [SLOW TEST:12.870 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":101,"skipped":1740,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:18:38.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-8vjz8 in namespace proxy-3160 I0224 00:18:38.260430 10 runners.go:189] Created replication controller with name: proxy-service-8vjz8, namespace: proxy-3160, replica count: 1 I0224 00:18:39.312343 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:18:40.312908 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:18:41.313301 10 runners.go:189] 
proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:18:42.313796 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:18:43.314205 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:18:44.323574 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:45.324293 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:46.325062 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:47.325665 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:48.326173 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:49.326835 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:50.327932 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:51.328744 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:52.329643 10 runners.go:189] proxy-service-8vjz8 Pods: 1 
out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0224 00:18:53.330540 10 runners.go:189] proxy-service-8vjz8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 24 00:18:53.372: INFO: setup took 15.165383817s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 24 00:18:53.409: INFO: (0) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 36.232612ms) Feb 24 00:18:53.409: INFO: (0) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 35.796824ms) Feb 24 00:18:53.410: INFO: (0) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 37.20621ms) Feb 24 00:18:53.412: INFO: (0) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 38.75112ms) Feb 24 00:18:53.416: INFO: (0) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 42.705513ms) Feb 24 00:18:53.417: INFO: (0) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... 
(200; 43.289192ms) Feb 24 00:18:53.417: INFO: (0) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 43.148182ms) Feb 24 00:18:53.425: INFO: (0) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 52.061399ms) Feb 24 00:18:53.426: INFO: (0) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 53.52281ms) Feb 24 00:18:53.427: INFO: (0) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 54.001228ms) Feb 24 00:18:53.428: INFO: (0) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 53.962771ms) Feb 24 00:18:53.429: INFO: (0) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 55.060983ms) Feb 24 00:18:53.429: INFO: (0) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 55.939182ms) Feb 24 00:18:53.441: INFO: (0) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test (200; 15.613478ms) Feb 24 00:18:53.459: INFO: (1) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 16.16251ms) Feb 24 00:18:53.459: INFO: (1) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test<... (200; 18.822202ms) Feb 24 00:18:53.462: INFO: (1) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... 
(200; 19.624318ms) Feb 24 00:18:53.462: INFO: (1) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 19.486606ms) Feb 24 00:18:53.463: INFO: (1) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 20.000735ms) Feb 24 00:18:53.463: INFO: (1) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 20.335893ms) Feb 24 00:18:53.466: INFO: (1) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 24.205113ms) Feb 24 00:18:53.466: INFO: (1) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 23.291849ms) Feb 24 00:18:53.467: INFO: (1) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 24.599664ms) Feb 24 00:18:53.482: INFO: (2) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 14.644126ms) Feb 24 00:18:53.483: INFO: (2) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 15.082467ms) Feb 24 00:18:53.483: INFO: (2) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 15.294962ms) Feb 24 00:18:53.483: INFO: (2) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 15.19746ms) Feb 24 00:18:53.483: INFO: (2) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test<... 
(200; 16.997245ms) Feb 24 00:18:53.484: INFO: (2) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 16.764704ms) Feb 24 00:18:53.486: INFO: (2) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 19.129048ms) Feb 24 00:18:53.486: INFO: (2) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 18.92893ms) Feb 24 00:18:53.486: INFO: (2) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... (200; 19.17184ms) Feb 24 00:18:53.486: INFO: (2) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 19.200483ms) Feb 24 00:18:53.489: INFO: (2) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 22.676764ms) Feb 24 00:18:53.497: INFO: (3) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 7.291939ms) Feb 24 00:18:53.497: INFO: (3) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 7.556589ms) Feb 24 00:18:53.499: INFO: (3) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 9.720792ms) Feb 24 00:18:53.510: INFO: (3) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 19.567866ms) Feb 24 00:18:53.510: INFO: (3) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 20.096933ms) Feb 24 00:18:53.510: INFO: (3) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 20.200567ms) Feb 24 00:18:53.511: INFO: (3) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 21.039164ms) Feb 24 00:18:53.512: INFO: (3) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 22.119641ms) Feb 24 00:18:53.513: INFO: (3) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 
22.966453ms) Feb 24 00:18:53.520: INFO: (3) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: ... (200; 29.498736ms) Feb 24 00:18:53.522: INFO: (3) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 31.693985ms) Feb 24 00:18:53.525: INFO: (3) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 34.350452ms) Feb 24 00:18:53.525: INFO: (3) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 34.852939ms) Feb 24 00:18:53.526: INFO: (3) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 35.955517ms) Feb 24 00:18:53.526: INFO: (3) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 35.931169ms) Feb 24 00:18:53.532: INFO: (4) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 6.073509ms) Feb 24 00:18:53.533: INFO: (4) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... 
(200; 6.130381ms) Feb 24 00:18:53.537: INFO: (4) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 10.076782ms) Feb 24 00:18:53.537: INFO: (4) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 10.934778ms) Feb 24 00:18:53.538: INFO: (4) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 11.263412ms) Feb 24 00:18:53.538: INFO: (4) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 11.312451ms) Feb 24 00:18:53.539: INFO: (4) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 12.281175ms) Feb 24 00:18:53.539: INFO: (4) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 12.521505ms) Feb 24 00:18:53.540: INFO: (4) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 13.168127ms) Feb 24 00:18:53.540: INFO: (4) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... (200; 13.053508ms) Feb 24 00:18:53.541: INFO: (4) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 14.733535ms) Feb 24 00:18:53.542: INFO: (4) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 15.09773ms) Feb 24 00:18:53.542: INFO: (4) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 15.175493ms) Feb 24 00:18:53.542: INFO: (4) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 15.286125ms) Feb 24 00:18:53.542: INFO: (4) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: ... 
(200; 4.457455ms) Feb 24 00:18:53.550: INFO: (5) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 7.392482ms) Feb 24 00:18:53.550: INFO: (5) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 7.755396ms) Feb 24 00:18:53.551: INFO: (5) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 8.237755ms) Feb 24 00:18:53.551: INFO: (5) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test<... (200; 8.753105ms) Feb 24 00:18:53.552: INFO: (5) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 9.302111ms) Feb 24 00:18:53.552: INFO: (5) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 9.530199ms) Feb 24 00:18:53.553: INFO: (5) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 10.2682ms) Feb 24 00:18:53.554: INFO: (5) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 10.997864ms) Feb 24 00:18:53.554: INFO: (5) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 10.980524ms) Feb 24 00:18:53.554: INFO: (5) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 11.143634ms) Feb 24 00:18:53.554: INFO: (5) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 11.230697ms) Feb 24 00:18:53.554: INFO: (5) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 11.309881ms) Feb 24 00:18:53.560: INFO: (6) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 5.980639ms) Feb 24 00:18:53.561: INFO: (6) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 6.691316ms) Feb 24 00:18:53.562: INFO: (6) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux 
(200; 7.95147ms) Feb 24 00:18:53.562: INFO: (6) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test<... (200; 13.76073ms) Feb 24 00:18:53.568: INFO: (6) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 14.02116ms) Feb 24 00:18:53.568: INFO: (6) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 13.771266ms) Feb 24 00:18:53.568: INFO: (6) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 13.730941ms) Feb 24 00:18:53.568: INFO: (6) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 14.363095ms) Feb 24 00:18:53.568: INFO: (6) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... (200; 13.829472ms) Feb 24 00:18:53.568: INFO: (6) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 14.063325ms) Feb 24 00:18:53.574: INFO: (7) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 5.257062ms) Feb 24 00:18:53.574: INFO: (7) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test<... (200; 13.750261ms) Feb 24 00:18:53.582: INFO: (7) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 13.822177ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 14.066141ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 14.136347ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... 
(200; 13.900693ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 14.206182ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 15.019147ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 14.906821ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 15.020831ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 14.849889ms) Feb 24 00:18:53.583: INFO: (7) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 15.064414ms) Feb 24 00:18:53.584: INFO: (7) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 14.927489ms) Feb 24 00:18:53.584: INFO: (7) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 15.020132ms) Feb 24 00:18:53.585: INFO: (7) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 16.406216ms) Feb 24 00:18:53.596: INFO: (8) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 10.547963ms) Feb 24 00:18:53.596: INFO: (8) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 10.49065ms) Feb 24 00:18:53.596: INFO: (8) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 10.616484ms) Feb 24 00:18:53.597: INFO: (8) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 11.564908ms) Feb 24 00:18:53.598: INFO: (8) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... (200; 11.937175ms) Feb 24 00:18:53.598: INFO: (8) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... 
(200; 12.563186ms) Feb 24 00:18:53.598: INFO: (8) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 12.288341ms) Feb 24 00:18:53.598: INFO: (8) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test (200; 13.781924ms) Feb 24 00:18:53.617: INFO: (9) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 14.243706ms) Feb 24 00:18:53.617: INFO: (9) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 13.96843ms) Feb 24 00:18:53.618: INFO: (9) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 15.062344ms) Feb 24 00:18:53.619: INFO: (9) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 15.613087ms) Feb 24 00:18:53.619: INFO: (9) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: ... (200; 17.097116ms) Feb 24 00:18:53.619: INFO: (9) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 17.034759ms) Feb 24 00:18:53.620: INFO: (9) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 16.846589ms) Feb 24 00:18:53.620: INFO: (9) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 16.732889ms) Feb 24 00:18:53.620: INFO: (9) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 17.173304ms) Feb 24 00:18:53.620: INFO: (9) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 16.953442ms) Feb 24 00:18:53.629: INFO: (10) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: ... 
(200; 9.629852ms) Feb 24 00:18:53.631: INFO: (10) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 9.762234ms) Feb 24 00:18:53.631: INFO: (10) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 10.726766ms) Feb 24 00:18:53.631: INFO: (10) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 10.780905ms) Feb 24 00:18:53.631: INFO: (10) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 10.660112ms) Feb 24 00:18:53.632: INFO: (10) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 10.771402ms) Feb 24 00:18:53.632: INFO: (10) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 10.778383ms) Feb 24 00:18:53.632: INFO: (10) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 10.968917ms) Feb 24 00:18:53.632: INFO: (10) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 10.959822ms) Feb 24 00:18:53.633: INFO: (10) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 12.419349ms) Feb 24 00:18:53.637: INFO: (10) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 16.274788ms) Feb 24 00:18:53.638: INFO: (10) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 17.046912ms) Feb 24 00:18:53.638: INFO: (10) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 17.485229ms) Feb 24 00:18:53.638: INFO: (10) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 17.931831ms) Feb 24 00:18:53.641: INFO: (10) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 20.276689ms) Feb 24 00:18:53.654: INFO: (11) 
/api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 12.158347ms) Feb 24 00:18:53.654: INFO: (11) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 12.386772ms) Feb 24 00:18:53.654: INFO: (11) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 12.609388ms) Feb 24 00:18:53.657: INFO: (11) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... (200; 14.968724ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 16.033027ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 15.504961ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 15.068599ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 16.013215ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 14.8454ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 15.143347ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 15.698833ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 16.834993ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test (200; 15.883915ms) Feb 24 00:18:53.658: INFO: (11) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 15.552642ms) Feb 24 00:18:53.659: INFO: (11) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 18.615259ms) Feb 
24 00:18:53.675: INFO: (12) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 14.56046ms) Feb 24 00:18:53.675: INFO: (12) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 15.251014ms) Feb 24 00:18:53.675: INFO: (12) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: ... (200; 14.558747ms) Feb 24 00:18:53.676: INFO: (12) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 15.588307ms) Feb 24 00:18:53.676: INFO: (12) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 14.812759ms) Feb 24 00:18:53.677: INFO: (12) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 16.240623ms) Feb 24 00:18:53.677: INFO: (12) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 15.794387ms) Feb 24 00:18:53.677: INFO: (12) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 15.719222ms) Feb 24 00:18:53.677: INFO: (12) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 16.590603ms) Feb 24 00:18:53.677: INFO: (12) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 15.935663ms) Feb 24 00:18:53.678: INFO: (12) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 17.086066ms) Feb 24 00:18:53.695: INFO: (13) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... 
(200; 16.699632ms) Feb 24 00:18:53.695: INFO: (13) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 16.441012ms) Feb 24 00:18:53.695: INFO: (13) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 16.480757ms) Feb 24 00:18:53.695: INFO: (13) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 16.688023ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 24.501759ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 24.573974ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 24.653597ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test<... (200; 25.203791ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 24.901646ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 25.323466ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 24.870593ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 25.015232ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 25.07535ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 25.089537ms) Feb 24 00:18:53.703: INFO: (13) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 25.462376ms) Feb 24 00:18:53.737: INFO: (14) 
/api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 33.19791ms) Feb 24 00:18:53.737: INFO: (14) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 33.403802ms) Feb 24 00:18:53.737: INFO: (14) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: ... (200; 33.998951ms) Feb 24 00:18:53.738: INFO: (14) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 33.761102ms) Feb 24 00:18:53.738: INFO: (14) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 34.05101ms) Feb 24 00:18:53.738: INFO: (14) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 34.331842ms) Feb 24 00:18:53.739: INFO: (14) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 34.917363ms) Feb 24 00:18:53.739: INFO: (14) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 35.099937ms) Feb 24 00:18:53.739: INFO: (14) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 35.632479ms) Feb 24 00:18:53.740: INFO: (14) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 35.959785ms) Feb 24 00:18:53.740: INFO: (14) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 35.671079ms) Feb 24 00:18:53.740: INFO: (14) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 35.868787ms) Feb 24 00:18:53.750: INFO: (15) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... 
(200; 9.01392ms) Feb 24 00:18:53.750: INFO: (15) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test (200; 20.999052ms) Feb 24 00:18:53.762: INFO: (15) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 21.100634ms) Feb 24 00:18:53.762: INFO: (15) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 21.325565ms) Feb 24 00:18:53.769: INFO: (15) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 28.866867ms) Feb 24 00:18:53.770: INFO: (15) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 29.28761ms) Feb 24 00:18:53.770: INFO: (15) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 29.860212ms) Feb 24 00:18:53.770: INFO: (15) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 29.57935ms) Feb 24 00:18:53.770: INFO: (15) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 29.987752ms) Feb 24 00:18:53.770: INFO: (15) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 29.750316ms) Feb 24 00:18:53.773: INFO: (15) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 32.585072ms) Feb 24 00:18:53.773: INFO: (15) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 32.952045ms) Feb 24 00:18:53.776: INFO: (15) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 35.935263ms) Feb 24 00:18:53.789: INFO: (16) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 12.411781ms) Feb 24 00:18:53.792: INFO: (16) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test (200; 18.506718ms) Feb 24 00:18:53.796: INFO: (16) 
/api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 18.328433ms) Feb 24 00:18:53.796: INFO: (16) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 18.64446ms) Feb 24 00:18:53.797: INFO: (16) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 19.579167ms) Feb 24 00:18:53.797: INFO: (16) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 19.855625ms) Feb 24 00:18:53.798: INFO: (16) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... (200; 20.952434ms) Feb 24 00:18:53.799: INFO: (16) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 21.243339ms) Feb 24 00:18:53.799: INFO: (16) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 21.867871ms) Feb 24 00:18:53.799: INFO: (16) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 22.845456ms) Feb 24 00:18:53.800: INFO: (16) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 22.372161ms) Feb 24 00:18:53.800: INFO: (16) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 22.277076ms) Feb 24 00:18:53.806: INFO: (17) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... (200; 5.766147ms) Feb 24 00:18:53.806: INFO: (17) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 5.732785ms) Feb 24 00:18:53.809: INFO: (17) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 8.863846ms) Feb 24 00:18:53.809: INFO: (17) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 9.157081ms) Feb 24 00:18:53.810: INFO: (17) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... 
(200; 9.7429ms) Feb 24 00:18:53.810: INFO: (17) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 10.037978ms) Feb 24 00:18:53.811: INFO: (17) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 10.478095ms) Feb 24 00:18:53.811: INFO: (17) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 11.279642ms) Feb 24 00:18:53.816: INFO: (17) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 16.05489ms) Feb 24 00:18:53.817: INFO: (17) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 16.576592ms) Feb 24 00:18:53.818: INFO: (17) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 17.555363ms) Feb 24 00:18:53.818: INFO: (17) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 17.753149ms) Feb 24 00:18:53.818: INFO: (17) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test<... 
(200; 11.321342ms) Feb 24 00:18:53.831: INFO: (18) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:460/proxy/: tls baz (200; 13.010766ms) Feb 24 00:18:53.831: INFO: (18) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: test (200; 18.398955ms) Feb 24 00:18:53.837: INFO: (18) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 17.981364ms) Feb 24 00:18:53.838: INFO: (18) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname2/proxy/: tls qux (200; 19.2632ms) Feb 24 00:18:53.838: INFO: (18) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 18.89166ms) Feb 24 00:18:53.838: INFO: (18) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 19.255938ms) Feb 24 00:18:53.838: INFO: (18) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:1080/proxy/: ... (200; 19.713596ms) Feb 24 00:18:53.838: INFO: (18) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:462/proxy/: tls qux (200; 19.737742ms) Feb 24 00:18:53.838: INFO: (18) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 20.097842ms) Feb 24 00:18:53.839: INFO: (18) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname2/proxy/: bar (200; 20.463175ms) Feb 24 00:18:53.839: INFO: (18) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 20.083267ms) Feb 24 00:18:53.839: INFO: (18) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 20.246416ms) Feb 24 00:18:53.847: INFO: (19) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:1080/proxy/: test<... 
(200; 8.187614ms) Feb 24 00:18:53.851: INFO: (19) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname1/proxy/: foo (200; 11.551204ms) Feb 24 00:18:53.854: INFO: (19) /api/v1/namespaces/proxy-3160/services/http:proxy-service-8vjz8:portname1/proxy/: foo (200; 14.725836ms) Feb 24 00:18:53.854: INFO: (19) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp/proxy/: test (200; 14.627338ms) Feb 24 00:18:53.854: INFO: (19) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:160/proxy/: foo (200; 15.082601ms) Feb 24 00:18:53.855: INFO: (19) /api/v1/namespaces/proxy-3160/pods/http:proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 15.3793ms) Feb 24 00:18:53.856: INFO: (19) /api/v1/namespaces/proxy-3160/services/proxy-service-8vjz8:portname2/proxy/: bar (200; 16.42699ms) Feb 24 00:18:53.856: INFO: (19) /api/v1/namespaces/proxy-3160/services/https:proxy-service-8vjz8:tlsportname1/proxy/: tls baz (200; 16.787499ms) Feb 24 00:18:53.856: INFO: (19) /api/v1/namespaces/proxy-3160/pods/https:proxy-service-8vjz8-scwjp:443/proxy/: ... (200; 18.779071ms) Feb 24 00:18:53.859: INFO: (19) /api/v1/namespaces/proxy-3160/pods/proxy-service-8vjz8-scwjp:162/proxy/: bar (200; 19.834941ms) STEP: deleting ReplicationController proxy-service-8vjz8 in namespace proxy-3160, will wait for the garbage collector to delete the pods Feb 24 00:18:53.925: INFO: Deleting ReplicationController proxy-service-8vjz8 took: 10.141522ms Feb 24 00:18:54.226: INFO: Terminating ReplicationController proxy-service-8vjz8 pods took: 300.918423ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:18:59.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3160" for this suite. 
• [SLOW TEST:20.955 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":280,"completed":102,"skipped":1748,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:18:59.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 00:18:59.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a" in namespace "downward-api-992" to be "success or failure"
Feb 24 00:18:59.304: INFO: Pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a": Phase="Pending", Reason="", readiness=false.
Elapsed: 91.251557ms
Feb 24 00:19:01.311: INFO: Pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098804774s
Feb 24 00:19:03.320: INFO: Pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107380814s
Feb 24 00:19:05.349: INFO: Pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136630849s
Feb 24 00:19:07.360: INFO: Pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147854893s
Feb 24 00:19:09.371: INFO: Pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158156622s
STEP: Saw pod success
Feb 24 00:19:09.371: INFO: Pod "downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a" satisfied condition "success or failure"
Feb 24 00:19:09.375: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a container client-container:
STEP: delete the pod
Feb 24 00:19:09.506: INFO: Waiting for pod downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a to disappear
Feb 24 00:19:09.519: INFO: Pod downwardapi-volume-d16c3a71-63fd-442e-aeec-e4479666ab5a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:19:09.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-992" for this suite.
• [SLOW TEST:10.487 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":103,"skipped":1798,"failed":0}
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:19:09.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-63c2b347-1b17-403c-976b-21c7368ba195
STEP: Creating a pod to test consume configMaps
Feb 24 00:19:09.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2" in namespace "configmap-4232" to be "success or failure"
Feb 24 00:19:09.890: INFO: Pod "pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2": Phase="Pending", Reason="", readiness=false. Elapsed: 47.015841ms
Feb 24 00:19:11.901: INFO: Pod "pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.058497045s Feb 24 00:19:13.919: INFO: Pod "pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076195487s Feb 24 00:19:15.942: INFO: Pod "pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099287669s Feb 24 00:19:17.955: INFO: Pod "pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112485328s STEP: Saw pod success Feb 24 00:19:17.955: INFO: Pod "pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2" satisfied condition "success or failure" Feb 24 00:19:17.961: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2 container configmap-volume-test: STEP: delete the pod Feb 24 00:19:18.086: INFO: Waiting for pod pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2 to disappear Feb 24 00:19:18.130: INFO: Pod pod-configmaps-7ee8f820-f213-426e-8546-38feffa327c2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:19:18.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4232" for this suite. 
• [SLOW TEST:8.619 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":104,"skipped":1798,"failed":0} [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:19:18.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 24 00:19:18.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb" in namespace "downward-api-3294" to be "success or failure" Feb 24 00:19:18.562: INFO: Pod "downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 105.941919ms Feb 24 00:19:20.574: INFO: Pod "downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118746241s Feb 24 00:19:22.585: INFO: Pod "downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129236263s Feb 24 00:19:24.594: INFO: Pod "downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137964113s Feb 24 00:19:26.605: INFO: Pod "downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.149631129s STEP: Saw pod success Feb 24 00:19:26.606: INFO: Pod "downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb" satisfied condition "success or failure" Feb 24 00:19:26.613: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb container client-container: STEP: delete the pod Feb 24 00:19:26.702: INFO: Waiting for pod downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb to disappear Feb 24 00:19:26.733: INFO: Pod downwardapi-volume-2ce92037-e72f-4b9e-93f6-86e8c417eabb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:19:26.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3294" for this suite. 
• [SLOW TEST:8.592 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":105,"skipped":1798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:19:26.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7052 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7052 I0224 00:19:27.063071 10 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7052, replica count: 2 I0224 00:19:30.114428 10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:19:33.115081 10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:19:36.115535 10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:19:39.118744 10 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:19:42.119320 10 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 24 00:19:42.119: INFO: Creating new exec pod Feb 24 00:19:51.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7052 execpodw2597 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 24 00:19:53.403: INFO: stderr: "I0224 00:19:53.194775 1282 log.go:172] (0xc000115810) (0xc000653e00) Create stream\nI0224 00:19:53.194986 1282 log.go:172] (0xc000115810) (0xc000653e00) Stream added, broadcasting: 1\nI0224 00:19:53.198446 1282 log.go:172] (0xc000115810) Reply frame received for 1\nI0224 00:19:53.198488 1282 log.go:172] (0xc000115810) (0xc0006286e0) Create stream\nI0224 00:19:53.198494 1282 log.go:172] (0xc000115810) (0xc0006286e0) Stream added, broadcasting: 3\nI0224 00:19:53.200397 1282 log.go:172] (0xc000115810) Reply frame received for 3\nI0224 00:19:53.200569 1282 log.go:172] (0xc000115810) (0xc000653ea0) Create stream\nI0224 00:19:53.200603 1282 log.go:172] (0xc000115810) (0xc000653ea0) Stream added, broadcasting: 5\nI0224 00:19:53.203605 1282 log.go:172] (0xc000115810) Reply frame received for 5\nI0224 00:19:53.288697 1282 log.go:172] (0xc000115810) Data frame received for 5\nI0224 00:19:53.288822 1282 log.go:172] (0xc000653ea0) (5) Data 
frame handling\nI0224 00:19:53.288855 1282 log.go:172] (0xc000653ea0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0224 00:19:53.293057 1282 log.go:172] (0xc000115810) Data frame received for 5\nI0224 00:19:53.293081 1282 log.go:172] (0xc000653ea0) (5) Data frame handling\nI0224 00:19:53.293096 1282 log.go:172] (0xc000653ea0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0224 00:19:53.395798 1282 log.go:172] (0xc000115810) Data frame received for 1\nI0224 00:19:53.395874 1282 log.go:172] (0xc000115810) (0xc000653ea0) Stream removed, broadcasting: 5\nI0224 00:19:53.395971 1282 log.go:172] (0xc000653e00) (1) Data frame handling\nI0224 00:19:53.395988 1282 log.go:172] (0xc000653e00) (1) Data frame sent\nI0224 00:19:53.396007 1282 log.go:172] (0xc000115810) (0xc0006286e0) Stream removed, broadcasting: 3\nI0224 00:19:53.396028 1282 log.go:172] (0xc000115810) (0xc000653e00) Stream removed, broadcasting: 1\nI0224 00:19:53.396045 1282 log.go:172] (0xc000115810) Go away received\nI0224 00:19:53.396401 1282 log.go:172] (0xc000115810) (0xc000653e00) Stream removed, broadcasting: 1\nI0224 00:19:53.396411 1282 log.go:172] (0xc000115810) (0xc0006286e0) Stream removed, broadcasting: 3\nI0224 00:19:53.396415 1282 log.go:172] (0xc000115810) (0xc000653ea0) Stream removed, broadcasting: 5\n" Feb 24 00:19:53.403: INFO: stdout: "" Feb 24 00:19:53.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7052 execpodw2597 -- /bin/sh -x -c nc -zv -t -w 2 10.96.89.14 80' Feb 24 00:19:53.722: INFO: stderr: "I0224 00:19:53.553513 1310 log.go:172] (0xc0003c6fd0) (0xc0006bdf40) Create stream\nI0224 00:19:53.553687 1310 log.go:172] (0xc0003c6fd0) (0xc0006bdf40) Stream added, broadcasting: 1\nI0224 00:19:53.556280 1310 log.go:172] (0xc0003c6fd0) Reply frame received for 1\nI0224 00:19:53.556314 1310 log.go:172] (0xc0003c6fd0) (0xc000664820) Create stream\nI0224 00:19:53.556321 1310 
log.go:172] (0xc0003c6fd0) (0xc000664820) Stream added, broadcasting: 3\nI0224 00:19:53.557210 1310 log.go:172] (0xc0003c6fd0) Reply frame received for 3\nI0224 00:19:53.557228 1310 log.go:172] (0xc0003c6fd0) (0xc0004454a0) Create stream\nI0224 00:19:53.557233 1310 log.go:172] (0xc0003c6fd0) (0xc0004454a0) Stream added, broadcasting: 5\nI0224 00:19:53.558449 1310 log.go:172] (0xc0003c6fd0) Reply frame received for 5\nI0224 00:19:53.629580 1310 log.go:172] (0xc0003c6fd0) Data frame received for 5\nI0224 00:19:53.629646 1310 log.go:172] (0xc0004454a0) (5) Data frame handling\nI0224 00:19:53.629674 1310 log.go:172] (0xc0004454a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.89.14 80\nI0224 00:19:53.635346 1310 log.go:172] (0xc0003c6fd0) Data frame received for 5\nI0224 00:19:53.635486 1310 log.go:172] (0xc0004454a0) (5) Data frame handling\nI0224 00:19:53.635511 1310 log.go:172] (0xc0004454a0) (5) Data frame sent\nConnection to 10.96.89.14 80 port [tcp/http] succeeded!\nI0224 00:19:53.712478 1310 log.go:172] (0xc0003c6fd0) Data frame received for 1\nI0224 00:19:53.712624 1310 log.go:172] (0xc0003c6fd0) (0xc000664820) Stream removed, broadcasting: 3\nI0224 00:19:53.712734 1310 log.go:172] (0xc0006bdf40) (1) Data frame handling\nI0224 00:19:53.712764 1310 log.go:172] (0xc0006bdf40) (1) Data frame sent\nI0224 00:19:53.712802 1310 log.go:172] (0xc0003c6fd0) (0xc0004454a0) Stream removed, broadcasting: 5\nI0224 00:19:53.712831 1310 log.go:172] (0xc0003c6fd0) (0xc0006bdf40) Stream removed, broadcasting: 1\nI0224 00:19:53.712846 1310 log.go:172] (0xc0003c6fd0) Go away received\nI0224 00:19:53.713774 1310 log.go:172] (0xc0003c6fd0) (0xc0006bdf40) Stream removed, broadcasting: 1\nI0224 00:19:53.713784 1310 log.go:172] (0xc0003c6fd0) (0xc000664820) Stream removed, broadcasting: 3\nI0224 00:19:53.713787 1310 log.go:172] (0xc0003c6fd0) (0xc0004454a0) Stream removed, broadcasting: 5\n" Feb 24 00:19:53.722: INFO: stdout: "" Feb 24 00:19:53.722: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-7052 execpodw2597 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30130' Feb 24 00:19:54.099: INFO: stderr: "I0224 00:19:53.850319 1330 log.go:172] (0xc000b14210) (0xc000bbc780) Create stream\nI0224 00:19:53.850483 1330 log.go:172] (0xc000b14210) (0xc000bbc780) Stream added, broadcasting: 1\nI0224 00:19:53.855387 1330 log.go:172] (0xc000b14210) Reply frame received for 1\nI0224 00:19:53.855476 1330 log.go:172] (0xc000b14210) (0xc000bbc820) Create stream\nI0224 00:19:53.855493 1330 log.go:172] (0xc000b14210) (0xc000bbc820) Stream added, broadcasting: 3\nI0224 00:19:53.857530 1330 log.go:172] (0xc000b14210) Reply frame received for 3\nI0224 00:19:53.857560 1330 log.go:172] (0xc000b14210) (0xc000b0a000) Create stream\nI0224 00:19:53.857572 1330 log.go:172] (0xc000b14210) (0xc000b0a000) Stream added, broadcasting: 5\nI0224 00:19:53.859126 1330 log.go:172] (0xc000b14210) Reply frame received for 5\nI0224 00:19:53.943127 1330 log.go:172] (0xc000b14210) Data frame received for 5\nI0224 00:19:53.943215 1330 log.go:172] (0xc000b0a000) (5) Data frame handling\nI0224 00:19:53.943266 1330 log.go:172] (0xc000b0a000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30130\nConnection to 10.96.2.250 30130 port [tcp/30130] succeeded!\nI0224 00:19:54.085048 1330 log.go:172] (0xc000b14210) (0xc000bbc820) Stream removed, broadcasting: 3\nI0224 00:19:54.085237 1330 log.go:172] (0xc000b14210) Data frame received for 1\nI0224 00:19:54.085251 1330 log.go:172] (0xc000bbc780) (1) Data frame handling\nI0224 00:19:54.085264 1330 log.go:172] (0xc000bbc780) (1) Data frame sent\nI0224 00:19:54.085272 1330 log.go:172] (0xc000b14210) (0xc000bbc780) Stream removed, broadcasting: 1\nI0224 00:19:54.086234 1330 log.go:172] (0xc000b14210) (0xc000b0a000) Stream removed, broadcasting: 5\nI0224 00:19:54.086295 1330 log.go:172] (0xc000b14210) (0xc000bbc780) Stream removed, broadcasting: 1\nI0224 00:19:54.086303 1330 log.go:172] (0xc000b14210) 
(0xc000bbc820) Stream removed, broadcasting: 3\nI0224 00:19:54.086316 1330 log.go:172] (0xc000b14210) (0xc000b0a000) Stream removed, broadcasting: 5\n" Feb 24 00:19:54.099: INFO: stdout: "" Feb 24 00:19:54.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7052 execpodw2597 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30130' Feb 24 00:19:54.407: INFO: stderr: "I0224 00:19:54.242143 1348 log.go:172] (0xc000a476b0) (0xc000aa0500) Create stream\nI0224 00:19:54.242431 1348 log.go:172] (0xc000a476b0) (0xc000aa0500) Stream added, broadcasting: 1\nI0224 00:19:54.252307 1348 log.go:172] (0xc000a476b0) Reply frame received for 1\nI0224 00:19:54.252360 1348 log.go:172] (0xc000a476b0) (0xc0007cfc20) Create stream\nI0224 00:19:54.252365 1348 log.go:172] (0xc000a476b0) (0xc0007cfc20) Stream added, broadcasting: 3\nI0224 00:19:54.253241 1348 log.go:172] (0xc000a476b0) Reply frame received for 3\nI0224 00:19:54.253258 1348 log.go:172] (0xc000a476b0) (0xc0007cfcc0) Create stream\nI0224 00:19:54.253264 1348 log.go:172] (0xc000a476b0) (0xc0007cfcc0) Stream added, broadcasting: 5\nI0224 00:19:54.254177 1348 log.go:172] (0xc000a476b0) Reply frame received for 5\nI0224 00:19:54.338022 1348 log.go:172] (0xc000a476b0) Data frame received for 5\nI0224 00:19:54.338089 1348 log.go:172] (0xc0007cfcc0) (5) Data frame handling\nI0224 00:19:54.338121 1348 log.go:172] (0xc0007cfcc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30130\nI0224 00:19:54.340422 1348 log.go:172] (0xc000a476b0) Data frame received for 5\nI0224 00:19:54.340491 1348 log.go:172] (0xc0007cfcc0) (5) Data frame handling\nI0224 00:19:54.340527 1348 log.go:172] (0xc0007cfcc0) (5) Data frame sent\nConnection to 10.96.1.234 30130 port [tcp/30130] succeeded!\nI0224 00:19:54.399039 1348 log.go:172] (0xc000a476b0) (0xc0007cfc20) Stream removed, broadcasting: 3\nI0224 00:19:54.399112 1348 log.go:172] (0xc000a476b0) Data frame received for 1\nI0224 00:19:54.399123 1348 log.go:172] 
(0xc000aa0500) (1) Data frame handling\nI0224 00:19:54.399132 1348 log.go:172] (0xc000aa0500) (1) Data frame sent\nI0224 00:19:54.399167 1348 log.go:172] (0xc000a476b0) (0xc000aa0500) Stream removed, broadcasting: 1\nI0224 00:19:54.399217 1348 log.go:172] (0xc000a476b0) (0xc0007cfcc0) Stream removed, broadcasting: 5\nI0224 00:19:54.399256 1348 log.go:172] (0xc000a476b0) Go away received\nI0224 00:19:54.399774 1348 log.go:172] (0xc000a476b0) (0xc000aa0500) Stream removed, broadcasting: 1\nI0224 00:19:54.399785 1348 log.go:172] (0xc000a476b0) (0xc0007cfc20) Stream removed, broadcasting: 3\nI0224 00:19:54.399792 1348 log.go:172] (0xc000a476b0) (0xc0007cfcc0) Stream removed, broadcasting: 5\n" Feb 24 00:19:54.407: INFO: stdout: "" Feb 24 00:19:54.407: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:19:54.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7052" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.733 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":106,"skipped":1836,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:19:54.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-8a0a4edd-015e-4dd2-91e4-f92b2c412ea7 STEP: Creating a pod to test consume configMaps Feb 24 00:19:54.562: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f" in namespace "projected-8977" to be "success or failure" Feb 24 00:19:54.566: INFO: Pod "pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.010295ms Feb 24 00:19:56.575: INFO: Pod "pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012000862s Feb 24 00:19:58.585: INFO: Pod "pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022382483s Feb 24 00:20:00.632: INFO: Pod "pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06881404s Feb 24 00:20:03.928: INFO: Pod "pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.365391391s STEP: Saw pod success Feb 24 00:20:03.928: INFO: Pod "pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f" satisfied condition "success or failure" Feb 24 00:20:03.967: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f container projected-configmap-volume-test: STEP: delete the pod Feb 24 00:20:04.473: INFO: Waiting for pod pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f to disappear Feb 24 00:20:04.494: INFO: Pod pod-projected-configmaps-6316efc4-818f-49b7-bc1b-88300fd9514f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:20:04.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8977" for this suite. 
• [SLOW TEST:10.033 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":107,"skipped":1842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:20:04.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9372 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Feb 24 00:20:04.774: INFO: Found 0 stateful pods, waiting for 3 Feb 24 00:20:14.788: 
INFO: Found 2 stateful pods, waiting for 3 Feb 24 00:20:24.788: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 24 00:20:24.788: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 24 00:20:24.788: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 24 00:20:34.792: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 24 00:20:34.792: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 24 00:20:34.792: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 24 00:20:34.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9372 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 24 00:20:35.322: INFO: stderr: "I0224 00:20:35.123925 1368 log.go:172] (0xc0008e0000) (0xc0008ce000) Create stream\nI0224 00:20:35.124130 1368 log.go:172] (0xc0008e0000) (0xc0008ce000) Stream added, broadcasting: 1\nI0224 00:20:35.127586 1368 log.go:172] (0xc0008e0000) Reply frame received for 1\nI0224 00:20:35.127632 1368 log.go:172] (0xc0008e0000) (0xc000671ea0) Create stream\nI0224 00:20:35.127646 1368 log.go:172] (0xc0008e0000) (0xc000671ea0) Stream added, broadcasting: 3\nI0224 00:20:35.129062 1368 log.go:172] (0xc0008e0000) Reply frame received for 3\nI0224 00:20:35.129097 1368 log.go:172] (0xc0008e0000) (0xc000a1e320) Create stream\nI0224 00:20:35.129102 1368 log.go:172] (0xc0008e0000) (0xc000a1e320) Stream added, broadcasting: 5\nI0224 00:20:35.130493 1368 log.go:172] (0xc0008e0000) Reply frame received for 5\nI0224 00:20:35.203129 1368 log.go:172] (0xc0008e0000) Data frame received for 5\nI0224 00:20:35.203166 1368 log.go:172] (0xc000a1e320) (5) Data frame handling\nI0224 00:20:35.203186 1368 log.go:172] (0xc000a1e320) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0224 00:20:35.234966 1368 log.go:172] (0xc0008e0000) Data frame received for 3\nI0224 00:20:35.234990 1368 log.go:172] (0xc000671ea0) (3) Data frame handling\nI0224 00:20:35.235008 1368 log.go:172] (0xc000671ea0) (3) Data frame sent\nI0224 00:20:35.311762 1368 log.go:172] (0xc0008e0000) Data frame received for 1\nI0224 00:20:35.311777 1368 log.go:172] (0xc0008ce000) (1) Data frame handling\nI0224 00:20:35.311791 1368 log.go:172] (0xc0008ce000) (1) Data frame sent\nI0224 00:20:35.311856 1368 log.go:172] (0xc0008e0000) (0xc000a1e320) Stream removed, broadcasting: 5\nI0224 00:20:35.311916 1368 log.go:172] (0xc0008e0000) (0xc0008ce000) Stream removed, broadcasting: 1\nI0224 00:20:35.312508 1368 log.go:172] (0xc0008e0000) (0xc000671ea0) Stream removed, broadcasting: 3\nI0224 00:20:35.312534 1368 log.go:172] (0xc0008e0000) Go away received\nI0224 00:20:35.312763 1368 log.go:172] (0xc0008e0000) (0xc0008ce000) Stream removed, broadcasting: 1\nI0224 00:20:35.312778 1368 log.go:172] (0xc0008e0000) (0xc000671ea0) Stream removed, broadcasting: 3\nI0224 00:20:35.312784 1368 log.go:172] (0xc0008e0000) (0xc000a1e320) Stream removed, broadcasting: 5\n" Feb 24 00:20:35.322: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 24 00:20:35.322: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 24 00:20:45.373: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 24 00:20:55.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9372 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:20:55.777: INFO: stderr: "I0224 00:20:55.619097 1384 
log.go:172] (0xc0009fae70) (0xc0009b40a0) Create stream\nI0224 00:20:55.619208 1384 log.go:172] (0xc0009fae70) (0xc0009b40a0) Stream added, broadcasting: 1\nI0224 00:20:55.621803 1384 log.go:172] (0xc0009fae70) Reply frame received for 1\nI0224 00:20:55.621836 1384 log.go:172] (0xc0009fae70) (0xc000a321e0) Create stream\nI0224 00:20:55.621850 1384 log.go:172] (0xc0009fae70) (0xc000a321e0) Stream added, broadcasting: 3\nI0224 00:20:55.622881 1384 log.go:172] (0xc0009fae70) Reply frame received for 3\nI0224 00:20:55.622907 1384 log.go:172] (0xc0009fae70) (0xc0009b4140) Create stream\nI0224 00:20:55.622919 1384 log.go:172] (0xc0009fae70) (0xc0009b4140) Stream added, broadcasting: 5\nI0224 00:20:55.623955 1384 log.go:172] (0xc0009fae70) Reply frame received for 5\nI0224 00:20:55.705358 1384 log.go:172] (0xc0009fae70) Data frame received for 3\nI0224 00:20:55.705456 1384 log.go:172] (0xc000a321e0) (3) Data frame handling\nI0224 00:20:55.705482 1384 log.go:172] (0xc000a321e0) (3) Data frame sent\nI0224 00:20:55.705792 1384 log.go:172] (0xc0009fae70) Data frame received for 5\nI0224 00:20:55.705803 1384 log.go:172] (0xc0009b4140) (5) Data frame handling\nI0224 00:20:55.705814 1384 log.go:172] (0xc0009b4140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0224 00:20:55.767645 1384 log.go:172] (0xc0009fae70) (0xc0009b4140) Stream removed, broadcasting: 5\nI0224 00:20:55.767951 1384 log.go:172] (0xc0009fae70) Data frame received for 1\nI0224 00:20:55.767978 1384 log.go:172] (0xc0009fae70) (0xc000a321e0) Stream removed, broadcasting: 3\nI0224 00:20:55.768127 1384 log.go:172] (0xc0009b40a0) (1) Data frame handling\nI0224 00:20:55.768199 1384 log.go:172] (0xc0009b40a0) (1) Data frame sent\nI0224 00:20:55.768243 1384 log.go:172] (0xc0009fae70) (0xc0009b40a0) Stream removed, broadcasting: 1\nI0224 00:20:55.768286 1384 log.go:172] (0xc0009fae70) Go away received\nI0224 00:20:55.769395 1384 log.go:172] (0xc0009fae70) (0xc0009b40a0) Stream removed, 
broadcasting: 1\nI0224 00:20:55.769431 1384 log.go:172] (0xc0009fae70) (0xc000a321e0) Stream removed, broadcasting: 3\nI0224 00:20:55.769448 1384 log.go:172] (0xc0009fae70) (0xc0009b4140) Stream removed, broadcasting: 5\n" Feb 24 00:20:55.777: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 24 00:20:55.777: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 24 00:21:05.809: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update Feb 24 00:21:05.809: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 24 00:21:05.809: INFO: Waiting for Pod statefulset-9372/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 24 00:21:15.822: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update Feb 24 00:21:15.822: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 24 00:21:15.822: INFO: Waiting for Pod statefulset-9372/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 24 00:21:25.826: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update Feb 24 00:21:25.826: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 24 00:21:35.830: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update Feb 24 00:21:35.830: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Feb 24 00:21:45.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9372 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 24 00:21:46.207: INFO: stderr: "I0224 00:21:46.012067 1405 log.go:172] (0xc0008ee000) (0xc00067bf40) Create stream\nI0224 
00:21:46.012165 1405 log.go:172] (0xc0008ee000) (0xc00067bf40) Stream added, broadcasting: 1\nI0224 00:21:46.015332 1405 log.go:172] (0xc0008ee000) Reply frame received for 1\nI0224 00:21:46.015403 1405 log.go:172] (0xc0008ee000) (0xc0005aea00) Create stream\nI0224 00:21:46.015417 1405 log.go:172] (0xc0008ee000) (0xc0005aea00) Stream added, broadcasting: 3\nI0224 00:21:46.016627 1405 log.go:172] (0xc0008ee000) Reply frame received for 3\nI0224 00:21:46.016644 1405 log.go:172] (0xc0008ee000) (0xc00021d680) Create stream\nI0224 00:21:46.016649 1405 log.go:172] (0xc0008ee000) (0xc00021d680) Stream added, broadcasting: 5\nI0224 00:21:46.017697 1405 log.go:172] (0xc0008ee000) Reply frame received for 5\nI0224 00:21:46.098131 1405 log.go:172] (0xc0008ee000) Data frame received for 5\nI0224 00:21:46.098177 1405 log.go:172] (0xc00021d680) (5) Data frame handling\nI0224 00:21:46.098219 1405 log.go:172] (0xc00021d680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 00:21:46.131104 1405 log.go:172] (0xc0008ee000) Data frame received for 3\nI0224 00:21:46.131118 1405 log.go:172] (0xc0005aea00) (3) Data frame handling\nI0224 00:21:46.131126 1405 log.go:172] (0xc0005aea00) (3) Data frame sent\nI0224 00:21:46.198266 1405 log.go:172] (0xc0008ee000) Data frame received for 1\nI0224 00:21:46.198300 1405 log.go:172] (0xc00067bf40) (1) Data frame handling\nI0224 00:21:46.198326 1405 log.go:172] (0xc00067bf40) (1) Data frame sent\nI0224 00:21:46.198650 1405 log.go:172] (0xc0008ee000) (0xc00021d680) Stream removed, broadcasting: 5\nI0224 00:21:46.198689 1405 log.go:172] (0xc0008ee000) (0xc0005aea00) Stream removed, broadcasting: 3\nI0224 00:21:46.198716 1405 log.go:172] (0xc0008ee000) (0xc00067bf40) Stream removed, broadcasting: 1\nI0224 00:21:46.198741 1405 log.go:172] (0xc0008ee000) Go away received\nI0224 00:21:46.199526 1405 log.go:172] (0xc0008ee000) (0xc00067bf40) Stream removed, broadcasting: 1\nI0224 00:21:46.199559 1405 log.go:172] (0xc0008ee000) 
(0xc0005aea00) Stream removed, broadcasting: 3\nI0224 00:21:46.199575 1405 log.go:172] (0xc0008ee000) (0xc00021d680) Stream removed, broadcasting: 5\n" Feb 24 00:21:46.207: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 24 00:21:46.207: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 24 00:21:56.260: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 24 00:22:06.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9372 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:22:06.757: INFO: stderr: "I0224 00:22:06.587854 1425 log.go:172] (0xc000b162c0) (0xc0009e8000) Create stream\nI0224 00:22:06.588500 1425 log.go:172] (0xc000b162c0) (0xc0009e8000) Stream added, broadcasting: 1\nI0224 00:22:06.592334 1425 log.go:172] (0xc000b162c0) Reply frame received for 1\nI0224 00:22:06.592377 1425 log.go:172] (0xc000b162c0) (0xc000962000) Create stream\nI0224 00:22:06.592394 1425 log.go:172] (0xc000b162c0) (0xc000962000) Stream added, broadcasting: 3\nI0224 00:22:06.593334 1425 log.go:172] (0xc000b162c0) Reply frame received for 3\nI0224 00:22:06.593362 1425 log.go:172] (0xc000b162c0) (0xc0009620a0) Create stream\nI0224 00:22:06.593366 1425 log.go:172] (0xc000b162c0) (0xc0009620a0) Stream added, broadcasting: 5\nI0224 00:22:06.595166 1425 log.go:172] (0xc000b162c0) Reply frame received for 5\nI0224 00:22:06.668645 1425 log.go:172] (0xc000b162c0) Data frame received for 5\nI0224 00:22:06.669094 1425 log.go:172] (0xc0009620a0) (5) Data frame handling\nI0224 00:22:06.669213 1425 log.go:172] (0xc0009620a0) (5) Data frame sent\nI0224 00:22:06.669739 1425 log.go:172] (0xc000b162c0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0224 00:22:06.669855 1425 log.go:172] (0xc000962000) (3) Data frame 
handling\nI0224 00:22:06.670400 1425 log.go:172] (0xc000962000) (3) Data frame sent\nI0224 00:22:06.748793 1425 log.go:172] (0xc000b162c0) Data frame received for 1\nI0224 00:22:06.748890 1425 log.go:172] (0xc000b162c0) (0xc0009620a0) Stream removed, broadcasting: 5\nI0224 00:22:06.748967 1425 log.go:172] (0xc0009e8000) (1) Data frame handling\nI0224 00:22:06.749001 1425 log.go:172] (0xc0009e8000) (1) Data frame sent\nI0224 00:22:06.749133 1425 log.go:172] (0xc000b162c0) (0xc000962000) Stream removed, broadcasting: 3\nI0224 00:22:06.749234 1425 log.go:172] (0xc000b162c0) (0xc0009e8000) Stream removed, broadcasting: 1\nI0224 00:22:06.749268 1425 log.go:172] (0xc000b162c0) Go away received\nI0224 00:22:06.749908 1425 log.go:172] (0xc000b162c0) (0xc0009e8000) Stream removed, broadcasting: 1\nI0224 00:22:06.749921 1425 log.go:172] (0xc000b162c0) (0xc000962000) Stream removed, broadcasting: 3\nI0224 00:22:06.749931 1425 log.go:172] (0xc000b162c0) (0xc0009620a0) Stream removed, broadcasting: 5\n" Feb 24 00:22:06.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 24 00:22:06.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 24 00:22:16.797: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update Feb 24 00:22:16.797: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 24 00:22:16.797: INFO: Waiting for Pod statefulset-9372/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 24 00:22:16.797: INFO: Waiting for Pod statefulset-9372/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 24 00:22:26.807: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update Feb 24 00:22:26.807: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 24 00:22:26.807: INFO: 
Waiting for Pod statefulset-9372/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 24 00:22:36.830: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update
Feb 24 00:22:36.830: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 24 00:22:46.808: INFO: Waiting for StatefulSet statefulset-9372/ss2 to complete update
Feb 24 00:22:46.808: INFO: Waiting for Pod statefulset-9372/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 24 00:22:56.823: INFO: Deleting all statefulset in ns statefulset-9372
Feb 24 00:22:56.877: INFO: Scaling statefulset ss2 to 0
Feb 24 00:23:36.912: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 00:23:36.917: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:23:36.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9372" for this suite.
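The polling loop above repeatedly compares each pod's controller revision against the StatefulSet's update revision (ss2-84f9d6bf57) until all three pods match. A minimal sketch of that completeness check — plain Python over hypothetical stand-in dicts, not the framework's actual Go helpers:

```python
# Sketch of the check the waiting loop in the log performs: a rolling update
# (or rollback) is complete once every pod carries the target update revision.
# The dicts below are hypothetical stand-ins for Pod objects, not client-go types.

def update_complete(pods, update_revision):
    """Return True when every pod's revision label matches the target revision."""
    return all(p["revision"] == update_revision for p in pods)

# Snapshot matching the log at 00:22:36: ss2-1 and ss2-2 have rolled forward,
# but ss2-0 still carries the old revision, so the test keeps waiting.
pods = [
    {"name": "ss2-0", "revision": "ss2-65c7964b94"},
    {"name": "ss2-1", "revision": "ss2-84f9d6bf57"},
    {"name": "ss2-2", "revision": "ss2-84f9d6bf57"},
]
print(update_complete(pods, "ss2-84f9d6bf57"))  # False: ss2-0 lags behind
```

Because a StatefulSet rolls pods in reverse ordinal order, ss2-0 is the last pod to converge — which is exactly why the log's final waits mention only ss2-0.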
• [SLOW TEST:212.487 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":108,"skipped":1935,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:23:37.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Feb 24 00:23:37.147: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb 24 00:23:37.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8313'
Feb 24 00:23:37.586: INFO: stderr: ""
Feb 24 00:23:37.586: INFO: stdout: "service/agnhost-slave created\n"
Feb 24 00:23:37.586: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb 24 00:23:37.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8313'
Feb 24 00:23:38.005: INFO: stderr: ""
Feb 24 00:23:38.006: INFO: stdout: "service/agnhost-master created\n"
Feb 24 00:23:38.007: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 24 00:23:38.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8313'
Feb 24 00:23:38.388: INFO: stderr: ""
Feb 24 00:23:38.388: INFO: stdout: "service/frontend created\n"
Feb 24 00:23:38.390: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb 24 00:23:38.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8313'
Feb 24 00:23:39.664: INFO: stderr: ""
Feb 24 00:23:39.664: INFO: stdout: "deployment.apps/frontend created\n"
Feb 24 00:23:39.665: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 24 00:23:39.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8313'
Feb 24 00:23:41.597: INFO: stderr: ""
Feb 24 00:23:41.597: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 24 00:23:41.599: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 24 00:23:41.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8313'
Feb 24 00:23:42.036: INFO: stderr: ""
Feb 24 00:23:42.036: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb 24 00:23:42.037: INFO: Waiting for all frontend pods to be Running.
Feb 24 00:24:07.092: INFO: Waiting for frontend to serve content.
Feb 24 00:24:07.111: INFO: Trying to add a new entry to the guestbook.
Feb 24 00:24:07.120: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Feb 24 00:24:12.161: INFO: Failed to get response from guestbook.
err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
[the same "Failed to get response from guestbook" entry, with an identical err/response body, was logged roughly every 5 seconds from 00:24:17.179 through 00:27:03.101; the repeats are omitted here]
Feb 24 00:27:08.102: FAIL: Cannot add new entry in 180 seconds.
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x551f740, 0xc000afbe40, 0xc0047f4f30, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:420 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0023f6d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0023f6d00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc0023f6d00, 0x4c9f938)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources
Feb 24 00:27:08.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8313'
Feb 24 00:27:08.328: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:27:08.328: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 00:27:08.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8313'
Feb 24 00:27:08.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:27:08.569: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 00:27:08.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8313'
Feb 24 00:27:08.793: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:27:08.793: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 00:27:08.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8313'
Feb 24 00:27:08.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:27:08.965: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 00:27:08.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8313'
Feb 24 00:27:09.090: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:27:09.090: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 24 00:27:09.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8313'
Feb 24 00:27:09.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:27:09.221: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "kubectl-8313".
STEP: Found 33 events.
Feb 24 00:27:09.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-5zcv2: {default-scheduler } Scheduled: Successfully assigned kubectl-8313/agnhost-master-74c46fb7d4-5zcv2 to jerma-node
Feb 24 00:27:09.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-4ntx7: {default-scheduler } Scheduled: Successfully assigned kubectl-8313/agnhost-slave-774cfc759f-4ntx7 to jerma-node
Feb 24 00:27:09.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-5dmfh: {default-scheduler } Scheduled: Successfully assigned kubectl-8313/agnhost-slave-774cfc759f-5dmfh to jerma-server-mvvl6gufaqub
Feb 24 00:27:09.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-j5j6m: {default-scheduler } Scheduled: Successfully assigned kubectl-8313/frontend-6c5f89d5d4-j5j6m to jerma-node
Feb 24 00:27:09.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-j8cg4: {default-scheduler } Scheduled: Successfully assigned kubectl-8313/frontend-6c5f89d5d4-j8cg4 to jerma-server-mvvl6gufaqub
Feb 24 00:27:09.284: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-ltvvd: {default-scheduler } Scheduled: Successfully assigned kubectl-8313/frontend-6c5f89d5d4-ltvvd to jerma-node
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:39 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:41 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:41 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-5zcv2
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:41 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-j5j6m
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:41 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-j8cg4
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:41 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-ltvvd
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:42 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:42 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-5dmfh
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:42 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-4ntx7
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:50 +0000 UTC - event for agnhost-master-74c46fb7d4-5zcv2: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:50 +0000 UTC - event for agnhost-slave-774cfc759f-5dmfh: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:50 +0000 UTC - event for frontend-6c5f89d5d4-j8cg4: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:51 +0000 UTC - event for frontend-6c5f89d5d4-ltvvd: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:54 +0000 UTC - event for frontend-6c5f89d5d4-j5j6m: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:55 +0000 UTC - event for agnhost-slave-774cfc759f-5dmfh: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:55 +0000 UTC - event for frontend-6c5f89d5d4-j8cg4: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Feb 24 00:27:09.284: INFO: At 2020-02-24 00:23:56 +0000 UTC - event for agnhost-slave-774cfc759f-5dmfh: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:23:56 +0000 UTC - event for frontend-6c5f89d5d4-j8cg4: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:23:57 +0000 UTC - event for agnhost-slave-774cfc759f-4ntx7: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:23:59 +0000 UTC - event for agnhost-master-74c46fb7d4-5zcv2: {kubelet jerma-node} Created: Created container master
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:23:59 +0000 UTC - event for frontend-6c5f89d5d4-j5j6m: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:23:59 +0000 UTC - event for frontend-6c5f89d5d4-ltvvd: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:24:00 +0000 UTC - event for agnhost-slave-774cfc759f-4ntx7: {kubelet jerma-node} Created: Created container slave
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:24:01 +0000 UTC - event for agnhost-master-74c46fb7d4-5zcv2: {kubelet jerma-node} Started: Started container master
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:24:01 +0000 UTC - event for agnhost-slave-774cfc759f-4ntx7: {kubelet jerma-node} Started: Started container slave
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:24:01 +0000 UTC - event for frontend-6c5f89d5d4-j5j6m: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 24 00:27:09.285: INFO: At 2020-02-24 00:24:01 +0000 UTC - event for frontend-6c5f89d5d4-ltvvd: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 24 00:27:09.306: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Feb 24 00:27:09.306: INFO: agnhost-master-74c46fb7d4-5zcv2  jerma-node  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:41 +0000 UTC }]
Feb 24 00:27:09.307: INFO: agnhost-slave-774cfc759f-4ntx7  jerma-node  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:42 +0000 UTC }]
Feb 24 00:27:09.307: INFO: agnhost-slave-774cfc759f-5dmfh  jerma-server-mvvl6gufaqub  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:42 +0000 UTC }]
Feb 24 00:27:09.307: INFO: frontend-6c5f89d5d4-j5j6m
jerma-node  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:41 +0000 UTC }]
Feb 24 00:27:09.307: INFO: frontend-6c5f89d5d4-j8cg4  jerma-server-mvvl6gufaqub  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:41 +0000 UTC }]
Feb 24 00:27:09.307: INFO: frontend-6c5f89d5d4-ltvvd  jerma-node  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:24:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:23:41 +0000 UTC }]
Feb 24 00:27:09.307: INFO:
Feb 24 00:27:09.345: INFO: Logging node info for node jerma-node
Feb 24 00:27:09.424: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 10328101 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0}
{} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-24 00:26:43 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-24 00:26:43 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-24 00:26:43 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-24 00:26:43 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 24 00:27:09.429: INFO: Logging kubelet events for node jerma-node Feb 24 00:27:09.448: INFO: Logging pods the kubelet thinks is on node jerma-node Feb 24 00:27:09.502: INFO: frontend-6c5f89d5d4-ltvvd started at 2020-02-24 00:23:41 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.503: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 24 00:27:09.503: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.503: INFO: Container kube-proxy ready: true, restart count 0 Feb 24 00:27:09.503: INFO: agnhost-master-74c46fb7d4-5zcv2 started at 2020-02-24 00:23:42 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.503: INFO: Container master ready: true, restart count 0 Feb 24 00:27:09.503: INFO: frontend-6c5f89d5d4-j5j6m started at 2020-02-24 00:23:42 
+0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.503: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 24 00:27:09.503: INFO: agnhost-slave-774cfc759f-4ntx7 started at 2020-02-24 00:23:44 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.503: INFO: Container slave ready: true, restart count 0 Feb 24 00:27:09.503: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded) Feb 24 00:27:09.503: INFO: Container weave ready: true, restart count 1 Feb 24 00:27:09.503: INFO: Container weave-npc ready: true, restart count 0 W0224 00:27:09.568126 10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 24 00:27:09.649: INFO: Latency metrics for node jerma-node Feb 24 00:27:09.649: INFO: Logging node info for node jerma-server-mvvl6gufaqub Feb 24 00:27:09.670: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 10327383 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 
DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-24 00:22:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-24 00:22:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-24 00:22:56 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-24 00:22:56 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 
k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 24 00:27:09.672: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub Feb 24 00:27:09.676: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub Feb 24 00:27:09.696: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.696: INFO: Container coredns ready: true, restart count 0 Feb 24 00:27:09.696: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container coredns ready: true, restart count 0 Feb 24 00:27:09.697: INFO: agnhost-slave-774cfc759f-5dmfh started at 2020-02-24 00:23:42 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container slave ready: true, restart 
count 0 Feb 24 00:27:09.697: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container kube-controller-manager ready: true, restart count 17 Feb 24 00:27:09.697: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container kube-proxy ready: true, restart count 0 Feb 24 00:27:09.697: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded) Feb 24 00:27:09.697: INFO: Container weave ready: true, restart count 0 Feb 24 00:27:09.697: INFO: Container weave-npc ready: true, restart count 0 Feb 24 00:27:09.697: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container kube-scheduler ready: true, restart count 23 Feb 24 00:27:09.697: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container kube-apiserver ready: true, restart count 1 Feb 24 00:27:09.697: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container etcd ready: true, restart count 1 Feb 24 00:27:09.697: INFO: frontend-6c5f89d5d4-j8cg4 started at 2020-02-24 00:23:41 +0000 UTC (0+1 container statuses recorded) Feb 24 00:27:09.697: INFO: Container guestbook-frontend ready: true, restart count 0 W0224 00:27:09.701069 10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 24 00:27:09.748: INFO: Latency metrics for node jerma-server-mvvl6gufaqub Feb 24 00:27:09.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8313" for this suite. 
• Failure [212.751 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 24 00:27:08.102: Cannot added new entry in 180 seconds. /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":108,"skipped":1939,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:27:09.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 24 00:27:12.022: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a" in namespace "downward-api-4177" to be "success or failure" Feb 24 00:27:12.043: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.602761ms Feb 24 00:27:15.057: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.034889294s Feb 24 00:27:17.122: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.099614593s Feb 24 00:27:19.150: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.127952172s Feb 24 00:27:21.157: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.134814066s Feb 24 00:27:23.164: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.141637601s Feb 24 00:27:25.171: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.148028192s STEP: Saw pod success Feb 24 00:27:25.171: INFO: Pod "downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a" satisfied condition "success or failure" Feb 24 00:27:25.174: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a container client-container: STEP: delete the pod Feb 24 00:27:25.233: INFO: Waiting for pod downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a to disappear Feb 24 00:27:25.238: INFO: Pod downwardapi-volume-04439204-ffce-43bb-9d8e-f8413105471a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:27:25.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4177" for this suite. • [SLOW TEST:15.498 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":109,"skipped":1944,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:27:25.259: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 24 00:27:25.469: INFO: Waiting up to 5m0s for pod "pod-fd752230-2302-452c-b39e-e096b570f2d1" in namespace "emptydir-3020" to be "success or failure" Feb 24 00:27:25.557: INFO: Pod "pod-fd752230-2302-452c-b39e-e096b570f2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 87.99445ms Feb 24 00:27:27.567: INFO: Pod "pod-fd752230-2302-452c-b39e-e096b570f2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097997222s Feb 24 00:27:29.571: INFO: Pod "pod-fd752230-2302-452c-b39e-e096b570f2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102616123s Feb 24 00:27:31.581: INFO: Pod "pod-fd752230-2302-452c-b39e-e096b570f2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112510406s Feb 24 00:27:33.591: INFO: Pod "pod-fd752230-2302-452c-b39e-e096b570f2d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.121711236s STEP: Saw pod success Feb 24 00:27:33.591: INFO: Pod "pod-fd752230-2302-452c-b39e-e096b570f2d1" satisfied condition "success or failure" Feb 24 00:27:33.595: INFO: Trying to get logs from node jerma-node pod pod-fd752230-2302-452c-b39e-e096b570f2d1 container test-container: STEP: delete the pod Feb 24 00:27:33.656: INFO: Waiting for pod pod-fd752230-2302-452c-b39e-e096b570f2d1 to disappear Feb 24 00:27:33.674: INFO: Pod pod-fd752230-2302-452c-b39e-e096b570f2d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:27:33.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3020" for this suite. • [SLOW TEST:8.427 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":110,"skipped":1948,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:27:33.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 24 00:27:44.365: INFO: Successfully updated pod "annotationupdateb25c7aec-5dd2-480b-8106-6f25a742f464" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:27:46.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3429" for this suite. • [SLOW TEST:12.793 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":111,"skipped":1952,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:27:46.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-9f08c1c1-f670-4d11-95ac-cb1dc660de12 STEP: Creating a pod to test consume configMaps Feb 24 00:27:46.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340" in namespace "configmap-514" to be "success or failure" Feb 24 00:27:46.633: INFO: Pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340": Phase="Pending", Reason="", readiness=false. Elapsed: 11.321032ms Feb 24 00:27:49.100: INFO: Pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478531773s Feb 24 00:27:51.105: INFO: Pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483851709s Feb 24 00:27:53.116: INFO: Pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494512486s Feb 24 00:27:55.142: INFO: Pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519849027s Feb 24 00:27:57.146: INFO: Pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.52464s STEP: Saw pod success Feb 24 00:27:57.146: INFO: Pod "pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340" satisfied condition "success or failure" Feb 24 00:27:57.149: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340 container configmap-volume-test: STEP: delete the pod Feb 24 00:27:57.178: INFO: Waiting for pod pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340 to disappear Feb 24 00:27:57.220: INFO: Pod pod-configmaps-e4079fc3-552f-4c35-8a8d-a314375c0340 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:27:57.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-514" for this suite. • [SLOW TEST:10.875 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1963,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:27:57.358: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-5397 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 24 00:27:57.548: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 24 00:27:57.666: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:27:59.705: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:28:01.671: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:28:04.089: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:28:05.746: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:28:07.672: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:28:09.675: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:28:11.692: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:28:13.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:28:15.674: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:28:17.677: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 24 00:28:17.694: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 24 00:28:19.746: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 24 00:28:21.709: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 24 00:28:23.715: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 24 
00:28:25.701: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 24 00:28:33.737: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5397 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 24 00:28:33.737: INFO: >>> kubeConfig: /root/.kube/config I0224 00:28:33.815372 10 log.go:172] (0xc001942420) (0xc00170d4a0) Create stream I0224 00:28:33.815709 10 log.go:172] (0xc001942420) (0xc00170d4a0) Stream added, broadcasting: 1 I0224 00:28:33.823074 10 log.go:172] (0xc001942420) Reply frame received for 1 I0224 00:28:33.823204 10 log.go:172] (0xc001942420) (0xc002500640) Create stream I0224 00:28:33.823243 10 log.go:172] (0xc001942420) (0xc002500640) Stream added, broadcasting: 3 I0224 00:28:33.825343 10 log.go:172] (0xc001942420) Reply frame received for 3 I0224 00:28:33.825378 10 log.go:172] (0xc001942420) (0xc002566640) Create stream I0224 00:28:33.825404 10 log.go:172] (0xc001942420) (0xc002566640) Stream added, broadcasting: 5 I0224 00:28:33.827875 10 log.go:172] (0xc001942420) Reply frame received for 5 I0224 00:28:33.960557 10 log.go:172] (0xc001942420) Data frame received for 3 I0224 00:28:33.960731 10 log.go:172] (0xc002500640) (3) Data frame handling I0224 00:28:33.960763 10 log.go:172] (0xc002500640) (3) Data frame sent I0224 00:28:34.108423 10 log.go:172] (0xc001942420) (0xc002500640) Stream removed, broadcasting: 3 I0224 00:28:34.108607 10 log.go:172] (0xc001942420) Data frame received for 1 I0224 00:28:34.108662 10 log.go:172] (0xc00170d4a0) (1) Data frame handling I0224 00:28:34.108705 10 log.go:172] (0xc00170d4a0) (1) Data frame sent I0224 00:28:34.108754 10 log.go:172] (0xc001942420) (0xc00170d4a0) Stream removed, broadcasting: 1 I0224 00:28:34.108863 10 log.go:172] (0xc001942420) (0xc002566640) Stream removed, 
broadcasting: 5 I0224 00:28:34.109216 10 log.go:172] (0xc001942420) (0xc00170d4a0) Stream removed, broadcasting: 1 I0224 00:28:34.109236 10 log.go:172] (0xc001942420) (0xc002500640) Stream removed, broadcasting: 3 I0224 00:28:34.109257 10 log.go:172] (0xc001942420) (0xc002566640) Stream removed, broadcasting: 5 I0224 00:28:34.109403 10 log.go:172] (0xc001942420) Go away received Feb 24 00:28:34.109: INFO: Waiting for responses: map[] Feb 24 00:28:34.229: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5397 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 24 00:28:34.229: INFO: >>> kubeConfig: /root/.kube/config I0224 00:28:34.283043 10 log.go:172] (0xc002667970) (0xc00190c1e0) Create stream I0224 00:28:34.283531 10 log.go:172] (0xc002667970) (0xc00190c1e0) Stream added, broadcasting: 1 I0224 00:28:34.287278 10 log.go:172] (0xc002667970) Reply frame received for 1 I0224 00:28:34.287339 10 log.go:172] (0xc002667970) (0xc00190c460) Create stream I0224 00:28:34.287369 10 log.go:172] (0xc002667970) (0xc00190c460) Stream added, broadcasting: 3 I0224 00:28:34.289046 10 log.go:172] (0xc002667970) Reply frame received for 3 I0224 00:28:34.289145 10 log.go:172] (0xc002667970) (0xc00190c500) Create stream I0224 00:28:34.289175 10 log.go:172] (0xc002667970) (0xc00190c500) Stream added, broadcasting: 5 I0224 00:28:34.290981 10 log.go:172] (0xc002667970) Reply frame received for 5 I0224 00:28:34.375363 10 log.go:172] (0xc002667970) Data frame received for 3 I0224 00:28:34.375468 10 log.go:172] (0xc00190c460) (3) Data frame handling I0224 00:28:34.375501 10 log.go:172] (0xc00190c460) (3) Data frame sent I0224 00:28:34.459463 10 log.go:172] (0xc002667970) Data frame received for 1 I0224 00:28:34.459603 10 log.go:172] (0xc002667970) (0xc00190c500) Stream removed, broadcasting: 5 
I0224 00:28:34.459642 10 log.go:172] (0xc00190c1e0) (1) Data frame handling I0224 00:28:34.459667 10 log.go:172] (0xc00190c1e0) (1) Data frame sent I0224 00:28:34.459714 10 log.go:172] (0xc002667970) (0xc00190c460) Stream removed, broadcasting: 3 I0224 00:28:34.459743 10 log.go:172] (0xc002667970) (0xc00190c1e0) Stream removed, broadcasting: 1 I0224 00:28:34.459764 10 log.go:172] (0xc002667970) Go away received I0224 00:28:34.460074 10 log.go:172] (0xc002667970) (0xc00190c1e0) Stream removed, broadcasting: 1 I0224 00:28:34.460093 10 log.go:172] (0xc002667970) (0xc00190c460) Stream removed, broadcasting: 3 I0224 00:28:34.460108 10 log.go:172] (0xc002667970) (0xc00190c500) Stream removed, broadcasting: 5 Feb 24 00:28:34.460: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:28:34.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5397" for this suite. 
• [SLOW TEST:37.118 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":113,"skipped":1970,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:28:34.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Feb 24 00:28:34.650: INFO: Waiting up to 5m0s for pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da" in namespace "var-expansion-7029" to be "success or failure" Feb 24 00:28:34.667: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.241732ms Feb 24 00:28:37.316: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665991884s Feb 24 00:28:39.341: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.690689943s Feb 24 00:28:41.349: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.699217166s Feb 24 00:28:43.648: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.997856919s Feb 24 00:28:45.658: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da": Phase="Pending", Reason="", readiness=false. Elapsed: 11.008348877s Feb 24 00:28:47.666: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.016320611s STEP: Saw pod success Feb 24 00:28:47.666: INFO: Pod "var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da" satisfied condition "success or failure" Feb 24 00:28:47.675: INFO: Trying to get logs from node jerma-node pod var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da container dapi-container: STEP: delete the pod Feb 24 00:28:49.783: INFO: Waiting for pod var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da to disappear Feb 24 00:28:49.801: INFO: Pod var-expansion-5dc4a1bf-0177-431b-87a2-ceaf0b7295da no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:28:49.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7029" for this suite. 
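The var-expansion test above exercises Kubernetes' `$(VAR_NAME)` substitution in a container's `args`. A minimal sketch of a pod that triggers the same behavior — the names and values here are illustrative, not the test's exact generated spec:

```yaml
# Illustrative pod: the kubelet expands $(MY_VALUE) in args because
# MY_VALUE is declared under env; an unresolvable reference would be
# passed through to the container literally.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c"]
    args: ["echo value: $(MY_VALUE)"]
    env:
    - name: MY_VALUE
      value: "test-value"
```

The test's "success or failure" condition corresponds to this pod reaching `Succeeded` after the echoed output confirms the substitution.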
• [SLOW TEST:15.370 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":114,"skipped":1970,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:28:49.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-e65a880a-62bf-4b2f-9649-4bf913e85726 STEP: Creating secret with name s-test-opt-upd-67ba1ba6-fec3-4aa2-ba80-652d290b5df3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e65a880a-62bf-4b2f-9649-4bf913e85726 STEP: Updating secret s-test-opt-upd-67ba1ba6-fec3-4aa2-ba80-652d290b5df3 STEP: Creating secret with name s-test-opt-create-227dc042-ef49-4591-a5d1-e49b4295361a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected 
secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:29:02.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6516" for this suite. • [SLOW TEST:12.812 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":115,"skipped":1992,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:29:02.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 24 00:29:02.805: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
Feb 24 00:29:02.823: INFO: Number of nodes with available pods: 0 Feb 24 00:29:02.824: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Feb 24 00:29:02.868: INFO: Number of nodes with available pods: 0 Feb 24 00:29:02.868: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:03.881: INFO: Number of nodes with available pods: 0 Feb 24 00:29:03.881: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:04.877: INFO: Number of nodes with available pods: 0 Feb 24 00:29:04.877: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:05.880: INFO: Number of nodes with available pods: 0 Feb 24 00:29:05.880: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:07.164: INFO: Number of nodes with available pods: 0 Feb 24 00:29:07.164: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:07.897: INFO: Number of nodes with available pods: 0 Feb 24 00:29:07.897: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:09.005: INFO: Number of nodes with available pods: 0 Feb 24 00:29:09.005: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:09.906: INFO: Number of nodes with available pods: 0 Feb 24 00:29:09.906: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:10.887: INFO: Number of nodes with available pods: 0 Feb 24 00:29:10.887: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:11.891: INFO: Number of nodes with available pods: 1 Feb 24 00:29:11.891: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 24 00:29:11.945: INFO: Number of nodes with available pods: 1 Feb 24 00:29:11.945: INFO: Number of running 
nodes: 0, number of available pods: 1 Feb 24 00:29:12.989: INFO: Number of nodes with available pods: 0 Feb 24 00:29:12.989: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 24 00:29:13.039: INFO: Number of nodes with available pods: 0 Feb 24 00:29:13.039: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:14.174: INFO: Number of nodes with available pods: 0 Feb 24 00:29:14.174: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:15.055: INFO: Number of nodes with available pods: 0 Feb 24 00:29:15.055: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:16.047: INFO: Number of nodes with available pods: 0 Feb 24 00:29:16.047: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:17.056: INFO: Number of nodes with available pods: 0 Feb 24 00:29:17.056: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:18.054: INFO: Number of nodes with available pods: 0 Feb 24 00:29:18.054: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:19.069: INFO: Number of nodes with available pods: 0 Feb 24 00:29:19.069: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:20.045: INFO: Number of nodes with available pods: 0 Feb 24 00:29:20.045: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:21.043: INFO: Number of nodes with available pods: 0 Feb 24 00:29:21.044: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:22.918: INFO: Number of nodes with available pods: 0 Feb 24 00:29:22.918: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:23.370: INFO: Number of nodes with available pods: 0 Feb 24 00:29:23.370: INFO: Node 
jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:24.048: INFO: Number of nodes with available pods: 0 Feb 24 00:29:24.048: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:25.187: INFO: Number of nodes with available pods: 0 Feb 24 00:29:25.188: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:26.044: INFO: Number of nodes with available pods: 0 Feb 24 00:29:26.045: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 24 00:29:27.044: INFO: Number of nodes with available pods: 1 Feb 24 00:29:27.044: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7697, will wait for the garbage collector to delete the pods Feb 24 00:29:27.116: INFO: Deleting DaemonSet.extensions daemon-set took: 12.505552ms Feb 24 00:29:27.417: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.513786ms Feb 24 00:29:33.447: INFO: Number of nodes with available pods: 0 Feb 24 00:29:33.447: INFO: Number of running nodes: 0, number of available pods: 0 Feb 24 00:29:33.451: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7697/daemonsets","resourceVersion":"10328845"},"items":null} Feb 24 00:29:33.453: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7697/pods","resourceVersion":"10328845"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:29:33.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7697" for this suite. 
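The daemon-set run above creates a DaemonSet with a node selector, labels a node to admit it, then relabels and switches the update strategy. A sketch of the kind of spec involved, with hypothetical label keys (the e2e test generates its own):

```yaml
# Illustrative DaemonSet pinned to labeled nodes; names/labels are
# placeholders, not the test's exact generated values.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue            # hypothetical label; the test flips blue -> green
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```

Labeling a node (e.g. `kubectl label node <node> color=blue`) makes the daemon pod schedulable there; changing the label to a non-matching value unschedules it, which is the transition the "running nodes: 0/1" log lines above are polling for.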
• [SLOW TEST:30.942 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":116,"skipped":1996,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:29:33.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 24 00:29:34.684: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 24 00:29:36.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:29:38.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 24 00:29:40.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718100974, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 24 00:29:43.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:29:56.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2865" for this suite. STEP: Destroying namespace "webhook-2865-markers" for this suite. 
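The "should honor timeout" steps above register a deliberately slow webhook with a `timeoutSeconds` shorter than its latency. A sketch of the relevant admission-registration fields, assuming a 5s-delay handler; the service name, path, and CA bundle are placeholders, not the test's generated values:

```yaml
# Sketch of a webhook registration exercising timeoutSeconds.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo      # hypothetical name
webhooks:
- name: slow.webhook.example.com
  timeoutSeconds: 1            # shorter than the handler's 5s delay -> request is rejected
  failurePolicy: Fail          # with Ignore, the same timeout is tolerated
  clientConfig:
    service:
      namespace: webhook-2865
      name: e2e-test-webhook
      path: /always-allow-delay-5s   # assumed handler path
    caBundle: "<base64-CA>"          # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

Leaving `timeoutSeconds` unset defaults it to 10s in the v1 API, which is why the final step above ("timeout is empty") succeeds.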
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:23.604 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":117,"skipped":2016,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:29:57.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 24 00:29:57.555: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed 
watch-5342 /api/v1/namespaces/watch-5342/configmaps/e2e-watch-test-label-changed 9405e20f-d350-4a89-8d0c-804edc51e533 10328986 0 2020-02-24 00:29:57 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 24 00:29:57.555: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5342 /api/v1/namespaces/watch-5342/configmaps/e2e-watch-test-label-changed 9405e20f-d350-4a89-8d0c-804edc51e533 10328987 0 2020-02-24 00:29:57 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 24 00:29:57.556: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5342 /api/v1/namespaces/watch-5342/configmaps/e2e-watch-test-label-changed 9405e20f-d350-4a89-8d0c-804edc51e533 10328988 0 2020-02-24 00:29:57 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 24 00:30:07.621: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5342 /api/v1/namespaces/watch-5342/configmaps/e2e-watch-test-label-changed 9405e20f-d350-4a89-8d0c-804edc51e533 10329029 0 2020-02-24 00:29:57 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 24 00:30:07.622: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed 
watch-5342 /api/v1/namespaces/watch-5342/configmaps/e2e-watch-test-label-changed 9405e20f-d350-4a89-8d0c-804edc51e533 10329030 0 2020-02-24 00:29:57 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 24 00:30:07.622: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5342 /api/v1/namespaces/watch-5342/configmaps/e2e-watch-test-label-changed 9405e20f-d350-4a89-8d0c-804edc51e533 10329031 0 2020-02-24 00:29:57 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:30:07.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5342" for this suite. • [SLOW TEST:10.466 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":118,"skipped":2025,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:30:07.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 24 00:30:07.842: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 24 00:30:12.907: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:30:13.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5549" for this suite. 
• [SLOW TEST:5.444 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":119,"skipped":2046,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:30:13.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4342 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-4342 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4342 
Feb 24 00:30:13.416: INFO: Found 0 stateful pods, waiting for 1 Feb 24 00:30:23.426: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Feb 24 00:30:33.432: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 24 00:30:33.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 24 00:30:36.364: INFO: stderr: "I0224 00:30:36.142051 1687 log.go:172] (0xc0007a4a50) (0xc0006b7b80) Create stream\nI0224 00:30:36.142111 1687 log.go:172] (0xc0007a4a50) (0xc0006b7b80) Stream added, broadcasting: 1\nI0224 00:30:36.146906 1687 log.go:172] (0xc0007a4a50) Reply frame received for 1\nI0224 00:30:36.146998 1687 log.go:172] (0xc0007a4a50) (0xc0008a40a0) Create stream\nI0224 00:30:36.147017 1687 log.go:172] (0xc0007a4a50) (0xc0008a40a0) Stream added, broadcasting: 3\nI0224 00:30:36.150651 1687 log.go:172] (0xc0007a4a50) Reply frame received for 3\nI0224 00:30:36.150716 1687 log.go:172] (0xc0007a4a50) (0xc000302000) Create stream\nI0224 00:30:36.150734 1687 log.go:172] (0xc0007a4a50) (0xc000302000) Stream added, broadcasting: 5\nI0224 00:30:36.155356 1687 log.go:172] (0xc0007a4a50) Reply frame received for 5\nI0224 00:30:36.243227 1687 log.go:172] (0xc0007a4a50) Data frame received for 5\nI0224 00:30:36.243281 1687 log.go:172] (0xc000302000) (5) Data frame handling\nI0224 00:30:36.243298 1687 log.go:172] (0xc000302000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 00:30:36.271964 1687 log.go:172] (0xc0007a4a50) Data frame received for 3\nI0224 00:30:36.272118 1687 log.go:172] (0xc0008a40a0) (3) Data frame handling\nI0224 00:30:36.272167 1687 log.go:172] (0xc0008a40a0) (3) Data frame sent\nI0224 00:30:36.352442 1687 log.go:172] (0xc0007a4a50) Data frame 
received for 1\nI0224 00:30:36.352490 1687 log.go:172] (0xc0006b7b80) (1) Data frame handling\nI0224 00:30:36.352506 1687 log.go:172] (0xc0006b7b80) (1) Data frame sent\nI0224 00:30:36.352530 1687 log.go:172] (0xc0007a4a50) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0224 00:30:36.353813 1687 log.go:172] (0xc0007a4a50) (0xc0008a40a0) Stream removed, broadcasting: 3\nI0224 00:30:36.353858 1687 log.go:172] (0xc0007a4a50) (0xc000302000) Stream removed, broadcasting: 5\nI0224 00:30:36.353885 1687 log.go:172] (0xc0007a4a50) Go away received\nI0224 00:30:36.353918 1687 log.go:172] (0xc0007a4a50) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0224 00:30:36.353937 1687 log.go:172] (0xc0007a4a50) (0xc0008a40a0) Stream removed, broadcasting: 3\nI0224 00:30:36.353956 1687 log.go:172] (0xc0007a4a50) (0xc000302000) Stream removed, broadcasting: 5\n" Feb 24 00:30:36.364: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 24 00:30:36.364: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 24 00:30:36.396: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 24 00:30:46.407: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 24 00:30:46.407: INFO: Waiting for statefulset status.replicas updated to 0 Feb 24 00:30:46.477: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 00:30:46.477: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:30:46.478: 
INFO: Feb 24 00:30:46.478: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 24 00:30:48.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.950070674s Feb 24 00:30:49.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.191872332s Feb 24 00:30:50.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974544187s Feb 24 00:30:51.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.904180297s Feb 24 00:30:52.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.890522855s Feb 24 00:30:56.914: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.825178265s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4342 Feb 24 00:30:58.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:30:58.801: INFO: stderr: "I0224 00:30:58.569910 1714 log.go:172] (0xc000959340) (0xc0009145a0) Create stream\nI0224 00:30:58.570200 1714 log.go:172] (0xc000959340) (0xc0009145a0) Stream added, broadcasting: 1\nI0224 00:30:58.587309 1714 log.go:172] (0xc000959340) Reply frame received for 1\nI0224 00:30:58.587409 1714 log.go:172] (0xc000959340) (0xc000940000) Create stream\nI0224 00:30:58.587422 1714 log.go:172] (0xc000959340) (0xc000940000) Stream added, broadcasting: 3\nI0224 00:30:58.589082 1714 log.go:172] (0xc000959340) Reply frame received for 3\nI0224 00:30:58.589122 1714 log.go:172] (0xc000959340) (0xc000914640) Create stream\nI0224 00:30:58.589137 1714 log.go:172] (0xc000959340) (0xc000914640) Stream added, broadcasting: 5\nI0224 00:30:58.590811 1714 log.go:172] (0xc000959340) Reply frame received for 5\nI0224 00:30:58.719087 1714 log.go:172] (0xc000959340) Data frame received for 3\nI0224 00:30:58.719148 1714 log.go:172] (0xc000940000) (3) Data frame handling\nI0224 
00:30:58.719160 1714 log.go:172] (0xc000940000) (3) Data frame sent\nI0224 00:30:58.719185 1714 log.go:172] (0xc000959340) Data frame received for 5\nI0224 00:30:58.719191 1714 log.go:172] (0xc000914640) (5) Data frame handling\nI0224 00:30:58.719204 1714 log.go:172] (0xc000914640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0224 00:30:58.793822 1714 log.go:172] (0xc000959340) (0xc000940000) Stream removed, broadcasting: 3\nI0224 00:30:58.794022 1714 log.go:172] (0xc000959340) Data frame received for 1\nI0224 00:30:58.794053 1714 log.go:172] (0xc0009145a0) (1) Data frame handling\nI0224 00:30:58.794090 1714 log.go:172] (0xc0009145a0) (1) Data frame sent\nI0224 00:30:58.794100 1714 log.go:172] (0xc000959340) (0xc000914640) Stream removed, broadcasting: 5\nI0224 00:30:58.794134 1714 log.go:172] (0xc000959340) (0xc0009145a0) Stream removed, broadcasting: 1\nI0224 00:30:58.794768 1714 log.go:172] (0xc000959340) Go away received\nI0224 00:30:58.794825 1714 log.go:172] (0xc000959340) (0xc0009145a0) Stream removed, broadcasting: 1\nI0224 00:30:58.794881 1714 log.go:172] (0xc000959340) (0xc000940000) Stream removed, broadcasting: 3\nI0224 00:30:58.794918 1714 log.go:172] (0xc000959340) (0xc000914640) Stream removed, broadcasting: 5\n" Feb 24 00:30:58.801: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 24 00:30:58.801: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 24 00:30:58.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:30:59.187: INFO: rc: 1 Feb 24 00:30:59.187: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 24 00:31:09.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:31:09.574: INFO: stderr: "I0224 00:31:09.397743 1746 log.go:172] (0xc0000f5600) (0xc00097e000) Create stream\nI0224 00:31:09.398078 1746 log.go:172] (0xc0000f5600) (0xc00097e000) Stream added, broadcasting: 1\nI0224 00:31:09.402078 1746 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0224 00:31:09.402114 1746 log.go:172] (0xc0000f5600) (0xc0005ffc20) Create stream\nI0224 00:31:09.402123 1746 log.go:172] (0xc0000f5600) (0xc0005ffc20) Stream added, broadcasting: 3\nI0224 00:31:09.403762 1746 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0224 00:31:09.403840 1746 log.go:172] (0xc0000f5600) (0xc000202000) Create stream\nI0224 00:31:09.403848 1746 log.go:172] (0xc0000f5600) (0xc000202000) Stream added, broadcasting: 5\nI0224 00:31:09.405330 1746 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0224 00:31:09.472363 1746 log.go:172] (0xc0000f5600) Data frame received for 5\nI0224 00:31:09.472541 1746 log.go:172] (0xc000202000) (5) Data frame handling\nI0224 00:31:09.472563 1746 log.go:172] (0xc000202000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0224 00:31:09.472663 1746 log.go:172] (0xc0000f5600) Data frame received for 3\nI0224 00:31:09.472697 1746 log.go:172] (0xc0005ffc20) (3) Data frame handling\nI0224 00:31:09.472717 1746 log.go:172] (0xc0005ffc20) (3) Data frame sent\nI0224 00:31:09.566114 1746 log.go:172] (0xc0000f5600) Data frame received for 1\nI0224 00:31:09.566176 1746 log.go:172] (0xc00097e000) (1) Data frame handling\nI0224 00:31:09.566195 1746 log.go:172] (0xc00097e000) 
(1) Data frame sent\nI0224 00:31:09.566218 1746 log.go:172] (0xc0000f5600) (0xc00097e000) Stream removed, broadcasting: 1\nI0224 00:31:09.566282 1746 log.go:172] (0xc0000f5600) (0xc0005ffc20) Stream removed, broadcasting: 3\nI0224 00:31:09.567110 1746 log.go:172] (0xc0000f5600) (0xc000202000) Stream removed, broadcasting: 5\nI0224 00:31:09.567136 1746 log.go:172] (0xc0000f5600) (0xc00097e000) Stream removed, broadcasting: 1\nI0224 00:31:09.567148 1746 log.go:172] (0xc0000f5600) (0xc0005ffc20) Stream removed, broadcasting: 3\nI0224 00:31:09.567162 1746 log.go:172] (0xc0000f5600) (0xc000202000) Stream removed, broadcasting: 5\n" Feb 24 00:31:09.575: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 24 00:31:09.575: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 24 00:31:09.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:31:09.882: INFO: stderr: "I0224 00:31:09.723341 1768 log.go:172] (0xc0009860b0) (0xc0005c2960) Create stream\nI0224 00:31:09.723665 1768 log.go:172] (0xc0009860b0) (0xc0005c2960) Stream added, broadcasting: 1\nI0224 00:31:09.726457 1768 log.go:172] (0xc0009860b0) Reply frame received for 1\nI0224 00:31:09.726496 1768 log.go:172] (0xc0009860b0) (0xc0006ff5e0) Create stream\nI0224 00:31:09.726509 1768 log.go:172] (0xc0009860b0) (0xc0006ff5e0) Stream added, broadcasting: 3\nI0224 00:31:09.728961 1768 log.go:172] (0xc0009860b0) Reply frame received for 3\nI0224 00:31:09.729018 1768 log.go:172] (0xc0009860b0) (0xc000adc000) Create stream\nI0224 00:31:09.729033 1768 log.go:172] (0xc0009860b0) (0xc000adc000) Stream added, broadcasting: 5\nI0224 00:31:09.730119 1768 log.go:172] (0xc0009860b0) Reply frame received for 5\nI0224 00:31:09.800173 1768 log.go:172] (0xc0009860b0) Data 
frame received for 3\nI0224 00:31:09.800245 1768 log.go:172] (0xc0006ff5e0) (3) Data frame handling\nI0224 00:31:09.800287 1768 log.go:172] (0xc0006ff5e0) (3) Data frame sent\nI0224 00:31:09.800552 1768 log.go:172] (0xc0009860b0) Data frame received for 5\nI0224 00:31:09.800563 1768 log.go:172] (0xc000adc000) (5) Data frame handling\nI0224 00:31:09.800570 1768 log.go:172] (0xc000adc000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0224 00:31:09.871666 1768 log.go:172] (0xc0009860b0) Data frame received for 1\nI0224 00:31:09.871708 1768 log.go:172] (0xc0005c2960) (1) Data frame handling\nI0224 00:31:09.871724 1768 log.go:172] (0xc0005c2960) (1) Data frame sent\nI0224 00:31:09.871765 1768 log.go:172] (0xc0009860b0) (0xc0005c2960) Stream removed, broadcasting: 1\nI0224 00:31:09.871851 1768 log.go:172] (0xc0009860b0) (0xc0006ff5e0) Stream removed, broadcasting: 3\nI0224 00:31:09.871876 1768 log.go:172] (0xc0009860b0) (0xc000adc000) Stream removed, broadcasting: 5\nI0224 00:31:09.871904 1768 log.go:172] (0xc0009860b0) Go away received\nI0224 00:31:09.872181 1768 log.go:172] (0xc0009860b0) (0xc0005c2960) Stream removed, broadcasting: 1\nI0224 00:31:09.872197 1768 log.go:172] (0xc0009860b0) (0xc0006ff5e0) Stream removed, broadcasting: 3\nI0224 00:31:09.872203 1768 log.go:172] (0xc0009860b0) (0xc000adc000) Stream removed, broadcasting: 5\n" Feb 24 00:31:09.882: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 24 00:31:09.882: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 24 00:31:09.889: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 24 00:31:09.889: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 24 00:31:09.889: INFO: Waiting for pod ss-2 to enter 
Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 24 00:31:09.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 24 00:31:10.195: INFO: stderr: "I0224 00:31:10.030801 1788 log.go:172] (0xc0005ce840) (0xc00095a1e0) Create stream\nI0224 00:31:10.030959 1788 log.go:172] (0xc0005ce840) (0xc00095a1e0) Stream added, broadcasting: 1\nI0224 00:31:10.034526 1788 log.go:172] (0xc0005ce840) Reply frame received for 1\nI0224 00:31:10.034637 1788 log.go:172] (0xc0005ce840) (0xc00095a280) Create stream\nI0224 00:31:10.034644 1788 log.go:172] (0xc0005ce840) (0xc00095a280) Stream added, broadcasting: 3\nI0224 00:31:10.036173 1788 log.go:172] (0xc0005ce840) Reply frame received for 3\nI0224 00:31:10.036193 1788 log.go:172] (0xc0005ce840) (0xc0005d6820) Create stream\nI0224 00:31:10.036198 1788 log.go:172] (0xc0005ce840) (0xc0005d6820) Stream added, broadcasting: 5\nI0224 00:31:10.037554 1788 log.go:172] (0xc0005ce840) Reply frame received for 5\nI0224 00:31:10.111231 1788 log.go:172] (0xc0005ce840) Data frame received for 5\nI0224 00:31:10.111324 1788 log.go:172] (0xc0005d6820) (5) Data frame handling\nI0224 00:31:10.111344 1788 log.go:172] (0xc0005d6820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 00:31:10.111363 1788 log.go:172] (0xc0005ce840) Data frame received for 3\nI0224 00:31:10.111369 1788 log.go:172] (0xc00095a280) (3) Data frame handling\nI0224 00:31:10.111381 1788 log.go:172] (0xc00095a280) (3) Data frame sent\nI0224 00:31:10.184872 1788 log.go:172] (0xc0005ce840) (0xc0005d6820) Stream removed, broadcasting: 5\nI0224 00:31:10.184982 1788 log.go:172] (0xc0005ce840) Data frame received for 1\nI0224 00:31:10.184995 1788 log.go:172] (0xc0005ce840) (0xc00095a280) Stream removed, broadcasting: 3\nI0224 00:31:10.185014 1788 log.go:172] 
(0xc00095a1e0) (1) Data frame handling\nI0224 00:31:10.185023 1788 log.go:172] (0xc00095a1e0) (1) Data frame sent\nI0224 00:31:10.185029 1788 log.go:172] (0xc0005ce840) (0xc00095a1e0) Stream removed, broadcasting: 1\nI0224 00:31:10.185054 1788 log.go:172] (0xc0005ce840) Go away received\nI0224 00:31:10.185893 1788 log.go:172] (0xc0005ce840) (0xc00095a1e0) Stream removed, broadcasting: 1\nI0224 00:31:10.185906 1788 log.go:172] (0xc0005ce840) (0xc00095a280) Stream removed, broadcasting: 3\nI0224 00:31:10.185911 1788 log.go:172] (0xc0005ce840) (0xc0005d6820) Stream removed, broadcasting: 5\n" Feb 24 00:31:10.195: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 24 00:31:10.195: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 24 00:31:10.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 24 00:31:10.571: INFO: stderr: "I0224 00:31:10.340528 1807 log.go:172] (0xc0000f4dc0) (0xc000481400) Create stream\nI0224 00:31:10.340637 1807 log.go:172] (0xc0000f4dc0) (0xc000481400) Stream added, broadcasting: 1\nI0224 00:31:10.343358 1807 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0224 00:31:10.343461 1807 log.go:172] (0xc0000f4dc0) (0xc000659ae0) Create stream\nI0224 00:31:10.343473 1807 log.go:172] (0xc0000f4dc0) (0xc000659ae0) Stream added, broadcasting: 3\nI0224 00:31:10.344455 1807 log.go:172] (0xc0000f4dc0) Reply frame received for 3\nI0224 00:31:10.344472 1807 log.go:172] (0xc0000f4dc0) (0xc0009ec000) Create stream\nI0224 00:31:10.344477 1807 log.go:172] (0xc0000f4dc0) (0xc0009ec000) Stream added, broadcasting: 5\nI0224 00:31:10.345226 1807 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0224 00:31:10.398517 1807 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0224 
00:31:10.398574 1807 log.go:172] (0xc0009ec000) (5) Data frame handling\nI0224 00:31:10.398590 1807 log.go:172] (0xc0009ec000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 00:31:10.446985 1807 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0224 00:31:10.447056 1807 log.go:172] (0xc000659ae0) (3) Data frame handling\nI0224 00:31:10.447134 1807 log.go:172] (0xc000659ae0) (3) Data frame sent\nI0224 00:31:10.553298 1807 log.go:172] (0xc0000f4dc0) (0xc000659ae0) Stream removed, broadcasting: 3\nI0224 00:31:10.553418 1807 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0224 00:31:10.553431 1807 log.go:172] (0xc000481400) (1) Data frame handling\nI0224 00:31:10.553441 1807 log.go:172] (0xc000481400) (1) Data frame sent\nI0224 00:31:10.553516 1807 log.go:172] (0xc0000f4dc0) (0xc000481400) Stream removed, broadcasting: 1\nI0224 00:31:10.554591 1807 log.go:172] (0xc0000f4dc0) (0xc0009ec000) Stream removed, broadcasting: 5\nI0224 00:31:10.554628 1807 log.go:172] (0xc0000f4dc0) Go away received\nI0224 00:31:10.554924 1807 log.go:172] (0xc0000f4dc0) (0xc000481400) Stream removed, broadcasting: 1\nI0224 00:31:10.555882 1807 log.go:172] (0xc0000f4dc0) (0xc000659ae0) Stream removed, broadcasting: 3\nI0224 00:31:10.555895 1807 log.go:172] (0xc0000f4dc0) (0xc0009ec000) Stream removed, broadcasting: 5\n" Feb 24 00:31:10.572: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 24 00:31:10.572: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 24 00:31:10.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 24 00:31:10.966: INFO: stderr: "I0224 00:31:10.793873 1827 log.go:172] (0xc000a94000) (0xc000bac0a0) Create stream\nI0224 00:31:10.794045 1827 log.go:172] (0xc000a94000) 
(0xc000bac0a0) Stream added, broadcasting: 1\nI0224 00:31:10.800724 1827 log.go:172] (0xc000a94000) Reply frame received for 1\nI0224 00:31:10.800801 1827 log.go:172] (0xc000a94000) (0xc000b8e0a0) Create stream\nI0224 00:31:10.800819 1827 log.go:172] (0xc000a94000) (0xc000b8e0a0) Stream added, broadcasting: 3\nI0224 00:31:10.802458 1827 log.go:172] (0xc000a94000) Reply frame received for 3\nI0224 00:31:10.802490 1827 log.go:172] (0xc000a94000) (0xc000a801e0) Create stream\nI0224 00:31:10.802497 1827 log.go:172] (0xc000a94000) (0xc000a801e0) Stream added, broadcasting: 5\nI0224 00:31:10.803914 1827 log.go:172] (0xc000a94000) Reply frame received for 5\nI0224 00:31:10.870636 1827 log.go:172] (0xc000a94000) Data frame received for 5\nI0224 00:31:10.870698 1827 log.go:172] (0xc000a801e0) (5) Data frame handling\nI0224 00:31:10.870714 1827 log.go:172] (0xc000a801e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 00:31:10.896623 1827 log.go:172] (0xc000a94000) Data frame received for 3\nI0224 00:31:10.896657 1827 log.go:172] (0xc000b8e0a0) (3) Data frame handling\nI0224 00:31:10.896676 1827 log.go:172] (0xc000b8e0a0) (3) Data frame sent\nI0224 00:31:10.955547 1827 log.go:172] (0xc000a94000) Data frame received for 1\nI0224 00:31:10.956200 1827 log.go:172] (0xc000bac0a0) (1) Data frame handling\nI0224 00:31:10.956263 1827 log.go:172] (0xc000bac0a0) (1) Data frame sent\nI0224 00:31:10.956319 1827 log.go:172] (0xc000a94000) (0xc000bac0a0) Stream removed, broadcasting: 1\nI0224 00:31:10.958460 1827 log.go:172] (0xc000a94000) (0xc000b8e0a0) Stream removed, broadcasting: 3\nI0224 00:31:10.958661 1827 log.go:172] (0xc000a94000) (0xc000a801e0) Stream removed, broadcasting: 5\nI0224 00:31:10.958742 1827 log.go:172] (0xc000a94000) (0xc000bac0a0) Stream removed, broadcasting: 1\nI0224 00:31:10.958763 1827 log.go:172] (0xc000a94000) (0xc000b8e0a0) Stream removed, broadcasting: 3\nI0224 00:31:10.958851 1827 log.go:172] (0xc000a94000) (0xc000a801e0) 
Stream removed, broadcasting: 5\nI0224 00:31:10.958938 1827 log.go:172] (0xc000a94000) Go away received\n" Feb 24 00:31:10.966: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 24 00:31:10.966: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 24 00:31:10.966: INFO: Waiting for statefulset status.replicas updated to 0 Feb 24 00:31:10.971: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 24 00:31:20.988: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 24 00:31:20.989: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 24 00:31:20.989: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 24 00:31:21.044: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 00:31:21.044: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:31:21.044: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:21.044: INFO: ss-2 jerma-node 
Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:21.044: INFO: Feb 24 00:31:21.044: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 00:31:23.738: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 00:31:23.738: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:31:23.739: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:23.739: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:23.739: INFO: Feb 24 00:31:23.739: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 00:31:24.748: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 00:31:24.748: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:31:24.748: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:24.748: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:24.748: INFO: Feb 24 00:31:24.748: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 00:31:26.071: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 
00:31:26.071: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:31:26.071: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:26.071: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:26.071: INFO: Feb 24 00:31:26.071: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 00:31:28.318: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 00:31:28.318: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:31:28.318: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:28.318: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:28.318: INFO: Feb 24 00:31:28.318: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 00:31:29.431: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 00:31:29.431: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:31:29.431: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 
UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:29.431: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:29.431: INFO: Feb 24 00:31:29.431: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 24 00:31:30.443: INFO: POD NODE PHASE GRACE CONDITIONS Feb 24 00:31:30.443: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:13 +0000 UTC }] Feb 24 00:31:30.443: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:30.443: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:31:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-24 00:30:46 +0000 UTC }] Feb 24 00:31:30.443: INFO: Feb 24 00:31:30.443: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4342 Feb 24 00:31:31.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:31:31.691: INFO: rc: 1 Feb 24 00:31:31.692: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 24 00:31:41.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:31:41.913: INFO: rc: 1 Feb 24 00:31:41.913: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 24 00:31:51.914: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:31:52.092: INFO: rc: 1 Feb 24 00:31:52.092: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:32:02.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:32:02.233: INFO: rc: 1 Feb 24 00:32:02.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:32:12.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:32:12.344: INFO: rc: 1 Feb 24 00:32:12.345: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:32:22.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:32:22.521: INFO: rc: 1 Feb 24 00:32:22.521: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:32:32.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:32:32.689: INFO: rc: 1 Feb 24 00:32:32.689: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:32:42.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:32:42.817: INFO: rc: 1 Feb 24 00:32:42.818: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:32:52.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:32:53.006: INFO: rc: 1 Feb 24 00:32:53.007: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: 
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:33:03.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:33:03.177: INFO: rc: 1 Feb 24 00:33:03.178: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:33:13.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:33:13.372: INFO: rc: 1 Feb 24 00:33:13.372: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:33:23.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:33:23.524: INFO: rc: 1 Feb 24 00:33:23.524: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:33:33.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:33:33.708: INFO: rc: 1 Feb 24 00:33:33.708: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:33:43.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:33:43.871: INFO: rc: 1 Feb 24 00:33:43.871: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:33:53.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:33:54.085: INFO: rc: 1 Feb 24 00:33:54.086: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:34:04.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:34:04.183: INFO: rc: 1 Feb 24 00:34:04.183: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:34:14.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:34:14.332: INFO: rc: 1 Feb 24 00:34:14.333: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:34:24.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:34:24.527: INFO: rc: 1 Feb 24 00:34:24.527: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:34:34.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:34:34.678: INFO: rc: 1 Feb 24 00:34:34.678: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:34:44.679: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:34:44.915: INFO: rc: 1 Feb 24 00:34:44.915: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:34:54.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:34:55.132: INFO: rc: 1 Feb 24 00:34:55.132: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:35:05.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:35:05.255: INFO: rc: 1 Feb 24 00:35:05.255: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:35:15.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:35:15.398: INFO: rc: 1 Feb 24 00:35:15.398: INFO: Waiting 10s to 
retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:35:25.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:35:25.519: INFO: rc: 1 Feb 24 00:35:25.519: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:35:35.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:35:35.689: INFO: rc: 1 Feb 24 00:35:35.690: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:35:45.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:35:45.835: INFO: rc: 1 Feb 24 00:35:45.836: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:35:55.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:35:56.013: INFO: rc: 1 Feb 24 00:35:56.013: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:36:06.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:36:06.249: INFO: rc: 1 Feb 24 00:36:06.250: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:36:16.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:36:16.379: INFO: rc: 1 Feb 24 00:36:16.379: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:36:26.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Feb 24 00:36:26.533: INFO: rc: 1 Feb 24 00:36:26.533: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 24 00:36:36.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4342 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 24 00:36:36.684: INFO: rc: 1 Feb 24 00:36:36.685: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Feb 24 00:36:36.685: INFO: Scaling statefulset ss to 0 Feb 24 00:36:36.697: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 24 00:36:36.699: INFO: Deleting all statefulset in ns statefulset-4342 Feb 24 00:36:36.701: INFO: Scaling statefulset ss to 0 Feb 24 00:36:36.718: INFO: Waiting for statefulset status.replicas updated to 0 Feb 24 00:36:36.721: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:36:36.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4342" for this suite. 
• [SLOW TEST:383.704 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":120,"skipped":2068,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:36:36.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0224 00:36:37.718416 10 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 24 00:36:37.718: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:36:37.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2735" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":121,"skipped":2075,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:36:37.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 24 00:36:37.823: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Feb 24 00:36:40.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 create -f -' Feb 24 00:36:44.140: INFO: stderr: "" Feb 24 00:36:44.140: INFO: stdout: "e2e-test-crd-publish-openapi-6650-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 24 00:36:44.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 delete e2e-test-crd-publish-openapi-6650-crds test-foo' Feb 24 00:36:44.276: INFO: stderr: "" Feb 24 00:36:44.276: INFO: stdout: "e2e-test-crd-publish-openapi-6650-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" 
deleted\n" Feb 24 00:36:44.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 apply -f -' Feb 24 00:36:44.627: INFO: stderr: "" Feb 24 00:36:44.627: INFO: stdout: "e2e-test-crd-publish-openapi-6650-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 24 00:36:44.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 delete e2e-test-crd-publish-openapi-6650-crds test-foo' Feb 24 00:36:44.768: INFO: stderr: "" Feb 24 00:36:44.768: INFO: stdout: "e2e-test-crd-publish-openapi-6650-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Feb 24 00:36:44.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 create -f -' Feb 24 00:36:45.201: INFO: rc: 1 Feb 24 00:36:45.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 apply -f -' Feb 24 00:36:45.655: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Feb 24 00:36:45.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 create -f -' Feb 24 00:36:45.987: INFO: rc: 1 Feb 24 00:36:45.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-602 apply -f -' Feb 24 00:36:46.274: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Feb 24 00:36:46.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6650-crds' Feb 24 00:36:46.530: INFO: stderr: "" Feb 24 00:36:46.531: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6650-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for 
Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Feb 24 00:36:46.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6650-crds.metadata' Feb 24 00:36:46.830: INFO: stderr: "" Feb 24 00:36:46.830: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6650-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. 
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Feb 24 00:36:46.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6650-crds.spec' Feb 24 00:36:47.121: INFO: stderr: "" Feb 24 00:36:47.121: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6650-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Feb 24 00:36:47.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6650-crds.spec.bars' Feb 24 00:36:47.475: INFO: stderr: "" Feb 24 00:36:47.475: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6650-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Feb 24 00:36:47.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6650-crds.spec.bars2' Feb 24 00:36:47.787: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:36:51.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-602" for this suite. 
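The passing test above registers a CRD with a validation schema, then checks that kubectl's client-side validation rejects unknown/missing properties and that `kubectl explain` can walk the published OpenAPI document. A CRD of roughly the shape this test registers (names here are illustrative, reconstructed from the `kubectl explain` output logged above, not the test's actual generated manifest) would look like:

```yaml
# Hypothetical CRD sketch matching the schema kubectl explain printed above:
# spec.bars is a list of objects with a required "name", an "age" string,
# and a "bazs" string list. A structural openAPIV3Schema is what lets the
# API server publish OpenAPI for the CR, enabling client-side validation
# and `kubectl explain` recursion.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # illustrative; the test generates a unique name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]   # corresponds to the "-required-" marker in the explain output
                  properties:
                    name:
                      type: string
                    age:
                      type: string
                    bazs:
                      type: array
                      items:
                        type: string
```

With such a CRD installed, `kubectl explain foos.spec.bars` would succeed while `kubectl explain foos.spec.bars2` exits non-zero (rc: 1), as seen in the log.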
• [SLOW TEST:13.703 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":122,"skipped":2097,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:36:51.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8884 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8884 STEP: creating replication controller externalsvc in namespace services-8884 I0224 00:36:51.627101 10 
runners.go:189] Created replication controller with name: externalsvc, namespace: services-8884, replica count: 2 I0224 00:36:54.678042 10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:36:57.678704 10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:37:00.679107 10 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0224 00:37:03.680173 10 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Feb 24 00:37:03.729: INFO: Creating new exec pod Feb 24 00:37:11.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8884 execpodlsnp7 -- /bin/sh -x -c nslookup clusterip-service' Feb 24 00:37:12.279: INFO: stderr: "I0224 00:37:12.020685 2707 log.go:172] (0xc00096a580) (0xc0004808c0) Create stream\nI0224 00:37:12.020786 2707 log.go:172] (0xc00096a580) (0xc0004808c0) Stream added, broadcasting: 1\nI0224 00:37:12.025159 2707 log.go:172] (0xc00096a580) Reply frame received for 1\nI0224 00:37:12.025280 2707 log.go:172] (0xc00096a580) (0xc0008ee000) Create stream\nI0224 00:37:12.025294 2707 log.go:172] (0xc00096a580) (0xc0008ee000) Stream added, broadcasting: 3\nI0224 00:37:12.026923 2707 log.go:172] (0xc00096a580) Reply frame received for 3\nI0224 00:37:12.026943 2707 log.go:172] (0xc00096a580) (0xc0008ee0a0) Create stream\nI0224 00:37:12.026949 2707 log.go:172] (0xc00096a580) (0xc0008ee0a0) Stream added, broadcasting: 5\nI0224 00:37:12.028347 2707 log.go:172] (0xc00096a580) Reply frame received for 5\nI0224 00:37:12.159196 2707 log.go:172] (0xc00096a580) Data frame received for 
5\nI0224 00:37:12.159253 2707 log.go:172] (0xc0008ee0a0) (5) Data frame handling\nI0224 00:37:12.159268 2707 log.go:172] (0xc0008ee0a0) (5) Data frame sent\nI0224 00:37:12.159274 2707 log.go:172] (0xc00096a580) Data frame received for 5\nI0224 00:37:12.159279 2707 log.go:172] (0xc0008ee0a0) (5) Data frame handling\n+ nslookup clusterip-service\nI0224 00:37:12.159326 2707 log.go:172] (0xc0008ee0a0) (5) Data frame sent\nI0224 00:37:12.180128 2707 log.go:172] (0xc00096a580) Data frame received for 3\nI0224 00:37:12.180440 2707 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0224 00:37:12.180460 2707 log.go:172] (0xc0008ee000) (3) Data frame sent\nI0224 00:37:12.182786 2707 log.go:172] (0xc00096a580) Data frame received for 3\nI0224 00:37:12.182841 2707 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0224 00:37:12.182870 2707 log.go:172] (0xc0008ee000) (3) Data frame sent\nI0224 00:37:12.271423 2707 log.go:172] (0xc00096a580) (0xc0008ee000) Stream removed, broadcasting: 3\nI0224 00:37:12.271558 2707 log.go:172] (0xc00096a580) Data frame received for 1\nI0224 00:37:12.271571 2707 log.go:172] (0xc0004808c0) (1) Data frame handling\nI0224 00:37:12.271594 2707 log.go:172] (0xc0004808c0) (1) Data frame sent\nI0224 00:37:12.271790 2707 log.go:172] (0xc00096a580) (0xc0004808c0) Stream removed, broadcasting: 1\nI0224 00:37:12.272475 2707 log.go:172] (0xc00096a580) (0xc0008ee0a0) Stream removed, broadcasting: 5\nI0224 00:37:12.272501 2707 log.go:172] (0xc00096a580) Go away received\nI0224 00:37:12.272526 2707 log.go:172] (0xc00096a580) (0xc0004808c0) Stream removed, broadcasting: 1\nI0224 00:37:12.272539 2707 log.go:172] (0xc00096a580) (0xc0008ee000) Stream removed, broadcasting: 3\nI0224 00:37:12.272545 2707 log.go:172] (0xc00096a580) (0xc0008ee0a0) Stream removed, broadcasting: 5\n" Feb 24 00:37:12.279: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8884.svc.cluster.local\tcanonical name = 
externalsvc.services-8884.svc.cluster.local.\nName:\texternalsvc.services-8884.svc.cluster.local\nAddress: 10.96.22.163\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8884, will wait for the garbage collector to delete the pods Feb 24 00:37:12.344: INFO: Deleting ReplicationController externalsvc took: 9.144722ms Feb 24 00:37:12.645: INFO: Terminating ReplicationController externalsvc pods took: 300.713299ms Feb 24 00:37:33.193: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:37:33.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8884" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:41.810 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":123,"skipped":2116,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Feb 24 00:37:33.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-771f2577-afba-4b98-8540-9e6cc09a617b STEP: Creating a pod to test consume secrets Feb 24 00:37:33.337: INFO: Waiting up to 5m0s for pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5" in namespace "secrets-7300" to be "success or failure" Feb 24 00:37:33.388: INFO: Pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.571678ms Feb 24 00:37:35.395: INFO: Pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057721896s Feb 24 00:37:37.402: INFO: Pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065003851s Feb 24 00:37:39.415: INFO: Pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077609086s Feb 24 00:37:41.422: INFO: Pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084377438s Feb 24 00:37:43.429: INFO: Pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.091716611s STEP: Saw pod success Feb 24 00:37:43.429: INFO: Pod "pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5" satisfied condition "success or failure" Feb 24 00:37:43.433: INFO: Trying to get logs from node jerma-node pod pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5 container secret-volume-test: STEP: delete the pod Feb 24 00:37:43.553: INFO: Waiting for pod pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5 to disappear Feb 24 00:37:43.561: INFO: Pod pod-secrets-cef7bd68-1d01-4226-847d-66513a4093a5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:37:43.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7300" for this suite. • [SLOW TEST:10.317 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":124,"skipped":2121,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:37:43.584: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Feb 24 00:37:54.253: INFO: Successfully updated pod "adopt-release-729fp" STEP: Checking that the Job readopts the Pod Feb 24 00:37:54.253: INFO: Waiting up to 15m0s for pod "adopt-release-729fp" in namespace "job-1337" to be "adopted" Feb 24 00:37:54.293: INFO: Pod "adopt-release-729fp": Phase="Running", Reason="", readiness=true. Elapsed: 40.500707ms Feb 24 00:37:56.305: INFO: Pod "adopt-release-729fp": Phase="Running", Reason="", readiness=true. Elapsed: 2.052046955s Feb 24 00:37:56.306: INFO: Pod "adopt-release-729fp" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Feb 24 00:37:56.826: INFO: Successfully updated pod "adopt-release-729fp" STEP: Checking that the Job releases the Pod Feb 24 00:37:56.826: INFO: Waiting up to 15m0s for pod "adopt-release-729fp" in namespace "job-1337" to be "released" Feb 24 00:37:56.843: INFO: Pod "adopt-release-729fp": Phase="Running", Reason="", readiness=true. Elapsed: 16.753927ms Feb 24 00:37:56.843: INFO: Pod "adopt-release-729fp" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 24 00:37:56.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1337" for this suite. 
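The adopt/release test above relies on label-selector-based ownership: the Job controller adopts orphaned pods whose labels match its selector, and releases pods whose labels are removed. A minimal Job of the kind this test exercises might be sketched as (field values illustrative, not taken from the test's actual spec):

```yaml
# Hypothetical Job sketch: the controller matches pods via the template
# labels below. Removing a pod's ownerReference orphans it and the Job
# re-adopts it (labels still match); stripping the labels makes the Job
# release the pod, as the "adopted"/"released" conditions in the log show.
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release            # illustrative name echoing the pod prefix in the log
spec:
  parallelism: 2
  template:
    metadata:
      labels:
        job: adopt-release
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: k8s.gcr.io/pause:3.2   # placeholder image; the e2e test uses its own
```

The "Ensuring active pods == parallelism" step in the log corresponds to waiting for both replicas of this template to be Running before manipulating ownership.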
• [SLOW TEST:13.296 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":125,"skipped":2182,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 24 00:37:56.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-1787 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 24 00:37:57.022: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 24 00:37:57.126: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:37:59.878: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:38:02.138: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true) Feb 24 00:38:03.134: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:38:05.519: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:38:07.506: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 24 00:38:09.135: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:11.959: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:13.135: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:15.133: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:17.135: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:19.133: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:21.135: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:23.133: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:25.144: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 24 00:38:27.133: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 24 00:38:27.137: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 24 00:38:39.300: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.3:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1787 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 24 00:38:39.300: INFO: >>> kubeConfig: /root/.kube/config I0224 00:38:39.355480 10 log.go:172] (0xc004c10c60) (0xc001570780) Create stream I0224 00:38:39.355688 10 log.go:172] (0xc004c10c60) (0xc001570780) Stream added, broadcasting: 1 I0224 00:38:39.362915 10 log.go:172] (0xc004c10c60) Reply frame received for 1 
I0224 00:38:39.363039 10 log.go:172] (0xc004c10c60) (0xc001570960) Create stream I0224 00:38:39.363058 10 log.go:172] (0xc004c10c60) (0xc001570960) Stream added, broadcasting: 3 I0224 00:38:39.364816 10 log.go:172] (0xc004c10c60) Reply frame received for 3 I0224 00:38:39.364879 10 log.go:172] (0xc004c10c60) (0xc0029960a0) Create stream I0224 00:38:39.364900 10 log.go:172] (0xc004c10c60) (0xc0029960a0) Stream added, broadcasting: 5 I0224 00:38:39.367265 10 log.go:172] (0xc004c10c60) Reply frame received for 5 I0224 00:38:39.464910 10 log.go:172] (0xc004c10c60) Data frame received for 3 I0224 00:38:39.464957 10 log.go:172] (0xc001570960) (3) Data frame handling I0224 00:38:39.464971 10 log.go:172] (0xc001570960) (3) Data frame sent I0224 00:38:39.537268 10 log.go:172] (0xc004c10c60) Data frame received for 1 I0224 00:38:39.537375 10 log.go:172] (0xc004c10c60) (0xc001570960) Stream removed, broadcasting: 3 I0224 00:38:39.537500 10 log.go:172] (0xc001570780) (1) Data frame handling I0224 00:38:39.537527 10 log.go:172] (0xc001570780) (1) Data frame sent I0224 00:38:39.537563 10 log.go:172] (0xc004c10c60) (0xc0029960a0) Stream removed, broadcasting: 5 I0224 00:38:39.537598 10 log.go:172] (0xc004c10c60) (0xc001570780) Stream removed, broadcasting: 1 I0224 00:38:39.537619 10 log.go:172] (0xc004c10c60) Go away received I0224 00:38:39.538232 10 log.go:172] (0xc004c10c60) (0xc001570780) Stream removed, broadcasting: 1 I0224 00:38:39.538359 10 log.go:172] (0xc004c10c60) (0xc001570960) Stream removed, broadcasting: 3 I0224 00:38:39.538380 10 log.go:172] (0xc004c10c60) (0xc0029960a0) Stream removed, broadcasting: 5 Feb 24 00:38:39.538: INFO: Found all expected endpoints: [netserver-0] Feb 24 00:38:39.543: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1787 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false}
Feb 24 00:38:39.543: INFO: >>> kubeConfig: /root/.kube/config
I0224 00:38:39.576237 10 log.go:172] (0xc004c11290) (0xc001570e60) Create stream
I0224 00:38:39.576384 10 log.go:172] (0xc004c11290) (0xc001570e60) Stream added, broadcasting: 1
I0224 00:38:39.580681 10 log.go:172] (0xc004c11290) Reply frame received for 1
I0224 00:38:39.580707 10 log.go:172] (0xc004c11290) (0xc001ad4640) Create stream
I0224 00:38:39.580713 10 log.go:172] (0xc004c11290) (0xc001ad4640) Stream added, broadcasting: 3
I0224 00:38:39.581541 10 log.go:172] (0xc004c11290) Reply frame received for 3
I0224 00:38:39.581559 10 log.go:172] (0xc004c11290) (0xc001ad4780) Create stream
I0224 00:38:39.581565 10 log.go:172] (0xc004c11290) (0xc001ad4780) Stream added, broadcasting: 5
I0224 00:38:39.582535 10 log.go:172] (0xc004c11290) Reply frame received for 5
I0224 00:38:39.649105 10 log.go:172] (0xc004c11290) Data frame received for 3
I0224 00:38:39.649183 10 log.go:172] (0xc001ad4640) (3) Data frame handling
I0224 00:38:39.649204 10 log.go:172] (0xc001ad4640) (3) Data frame sent
I0224 00:38:39.753862 10 log.go:172] (0xc004c11290) (0xc001ad4640) Stream removed, broadcasting: 3
I0224 00:38:39.754061 10 log.go:172] (0xc004c11290) Data frame received for 1
I0224 00:38:39.754092 10 log.go:172] (0xc004c11290) (0xc001ad4780) Stream removed, broadcasting: 5
I0224 00:38:39.754225 10 log.go:172] (0xc001570e60) (1) Data frame handling
I0224 00:38:39.754257 10 log.go:172] (0xc001570e60) (1) Data frame sent
I0224 00:38:39.754278 10 log.go:172] (0xc004c11290) (0xc001570e60) Stream removed, broadcasting: 1
I0224 00:38:39.754309 10 log.go:172] (0xc004c11290) Go away received
I0224 00:38:39.755445 10 log.go:172] (0xc004c11290) (0xc001570e60) Stream removed, broadcasting: 1
I0224 00:38:39.755528 10 log.go:172] (0xc004c11290) (0xc001ad4640) Stream removed, broadcasting: 3
I0224 00:38:39.755539 10 log.go:172] (0xc004c11290) (0xc001ad4780) Stream removed, broadcasting: 5
Feb 24 00:38:39.755: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:38:39.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1787" for this suite.
• [SLOW TEST:42.888 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":126,"skipped":2184,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:38:39.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-78efed74-b0c7-42c1-927f-6d3e5f597493
STEP: Creating configMap with name cm-test-opt-upd-dee1f433-6a75-4b49-9cb9-1e159e24e496
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-78efed74-b0c7-42c1-927f-6d3e5f597493
STEP: Updating configmap cm-test-opt-upd-dee1f433-6a75-4b49-9cb9-1e159e24e496
STEP: Creating configMap with name cm-test-opt-create-82bf2139-1d00-48d3-bb36-5d1c08b15188
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:00.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6322" for this suite.
• [SLOW TEST:20.550 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":127,"skipped":2208,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:00.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 24 00:39:00.542: INFO: Waiting up to 5m0s for pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9" in namespace "emptydir-9335" to be "success or failure"
Feb 24 00:39:00.627: INFO: Pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9": Phase="Pending", Reason="", readiness=false. Elapsed: 84.36678ms
Feb 24 00:39:02.632: INFO: Pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089835078s
Feb 24 00:39:04.639: INFO: Pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096302865s
Feb 24 00:39:07.056: INFO: Pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.514162415s
Feb 24 00:39:09.070: INFO: Pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527214437s
Feb 24 00:39:11.385: INFO: Pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.842493125s
STEP: Saw pod success
Feb 24 00:39:11.385: INFO: Pod "pod-2c07f2b3-0787-42b7-9b7b-33be338926f9" satisfied condition "success or failure"
Feb 24 00:39:11.838: INFO: Trying to get logs from node jerma-node pod pod-2c07f2b3-0787-42b7-9b7b-33be338926f9 container test-container:
STEP: delete the pod
Feb 24 00:39:12.208: INFO: Waiting for pod pod-2c07f2b3-0787-42b7-9b7b-33be338926f9 to disappear
Feb 24 00:39:12.225: INFO: Pod pod-2c07f2b3-0787-42b7-9b7b-33be338926f9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:12.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9335" for this suite.
• [SLOW TEST:11.923 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":128,"skipped":2251,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:12.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 00:39:12.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7" in namespace "projected-5119" to be "success or failure"
Feb 24 00:39:12.456: INFO: Pod "downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7": Phase="Pending", Reason="", readiness=false. Elapsed: 60.470841ms
Feb 24 00:39:14.465: INFO: Pod "downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069399197s
Feb 24 00:39:16.482: INFO: Pod "downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086022069s
Feb 24 00:39:18.496: INFO: Pod "downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099598874s
Feb 24 00:39:20.511: INFO: Pod "downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114695769s
STEP: Saw pod success
Feb 24 00:39:20.511: INFO: Pod "downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7" satisfied condition "success or failure"
Feb 24 00:39:20.519: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7 container client-container:
STEP: delete the pod
Feb 24 00:39:20.797: INFO: Waiting for pod downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7 to disappear
Feb 24 00:39:20.808: INFO: Pod downwardapi-volume-75d99b27-4409-41d0-adb2-da708fc2ceb7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:20.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5119" for this suite.
• [SLOW TEST:8.664 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":129,"skipped":2251,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:20.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:31.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6860" for this suite.
• [SLOW TEST:10.230 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":2282,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:31.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 24 00:39:31.314: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 00:39:31.336: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 00:39:31.341: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Feb 24 00:39:31.352: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.352: INFO: Container kube-proxy ready: true, restart count 0
Feb 24 00:39:31.352: INFO: busybox-readonly-fsaa4c2a0d-733a-4cf3-92d3-8d4daf14ade1 from kubelet-test-6860 started at 2020-02-24 00:39:21 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.352: INFO: Container busybox-readonly-fsaa4c2a0d-733a-4cf3-92d3-8d4daf14ade1 ready: true, restart count 0
Feb 24 00:39:31.352: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 24 00:39:31.352: INFO: Container weave ready: true, restart count 1
Feb 24 00:39:31.352: INFO: Container weave-npc ready: true, restart count 0
Feb 24 00:39:31.352: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 24 00:39:31.363: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.363: INFO: Container kube-apiserver ready: true, restart count 1
Feb 24 00:39:31.363: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.363: INFO: Container etcd ready: true, restart count 1
Feb 24 00:39:31.363: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.363: INFO: Container coredns ready: true, restart count 0
Feb 24 00:39:31.363: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.363: INFO: Container coredns ready: true, restart count 0
Feb 24 00:39:31.363: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.363: INFO: Container kube-controller-manager ready: true, restart count 17
Feb 24 00:39:31.363: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.363: INFO: Container kube-proxy ready: true, restart count 0
Feb 24 00:39:31.363: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 24 00:39:31.363: INFO: Container weave ready: true, restart count 0
Feb 24 00:39:31.363: INFO: Container weave-npc ready: true, restart count 0
Feb 24 00:39:31.364: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 00:39:31.364: INFO: Container kube-scheduler ready: true, restart count 23
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f62fb481f74238], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:32.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-774" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":131,"skipped":2308,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:32.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 00:39:32.849: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 00:39:34.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:39:36.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:39:38.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:39:40.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718101572, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 00:39:44.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:44.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8420" for this suite.
STEP: Destroying namespace "webhook-8420-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:12.159 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":132,"skipped":2310,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:44.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 00:39:44.729: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:45.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9969" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":280,"completed":133,"skipped":2324,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:45.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:39:45.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9579" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":134,"skipped":2331,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:39:45.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 24 00:39:46.332: INFO: Number of nodes with available pods: 0
Feb 24 00:39:46.332: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:48.659: INFO: Number of nodes with available pods: 0
Feb 24 00:39:48.659: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:50.187: INFO: Number of nodes with available pods: 0
Feb 24 00:39:50.187: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:50.663: INFO: Number of nodes with available pods: 0
Feb 24 00:39:50.664: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:51.412: INFO: Number of nodes with available pods: 0
Feb 24 00:39:51.412: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:52.367: INFO: Number of nodes with available pods: 0
Feb 24 00:39:52.367: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:54.275: INFO: Number of nodes with available pods: 0
Feb 24 00:39:54.275: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:54.355: INFO: Number of nodes with available pods: 0
Feb 24 00:39:54.356: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:55.448: INFO: Number of nodes with available pods: 0
Feb 24 00:39:55.448: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:56.347: INFO: Number of nodes with available pods: 0
Feb 24 00:39:56.347: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:57.348: INFO: Number of nodes with available pods: 0
Feb 24 00:39:57.348: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:39:58.343: INFO: Number of nodes with available pods: 1
Feb 24 00:39:58.343: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:39:59.346: INFO: Number of nodes with available pods: 2
Feb 24 00:39:59.346: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 24 00:39:59.401: INFO: Number of nodes with available pods: 1
Feb 24 00:39:59.401: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:00.426: INFO: Number of nodes with available pods: 1
Feb 24 00:40:00.426: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:01.440: INFO: Number of nodes with available pods: 1
Feb 24 00:40:01.441: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:02.415: INFO: Number of nodes with available pods: 1
Feb 24 00:40:02.415: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:03.415: INFO: Number of nodes with available pods: 1
Feb 24 00:40:03.415: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:04.415: INFO: Number of nodes with available pods: 1
Feb 24 00:40:04.415: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:05.416: INFO: Number of nodes with available pods: 1
Feb 24 00:40:05.416: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:06.418: INFO: Number of nodes with available pods: 1
Feb 24 00:40:06.418: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:07.422: INFO: Number of nodes with available pods: 1
Feb 24 00:40:07.422: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:08.414: INFO: Number of nodes with available pods: 1
Feb 24 00:40:08.414: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:09.415: INFO: Number of nodes with available pods: 1
Feb 24 00:40:09.415: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:10.417: INFO: Number of nodes with available pods: 1
Feb 24 00:40:10.417: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:11.418: INFO: Number of nodes with available pods: 1
Feb 24 00:40:11.418: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:12.414: INFO: Number of nodes with available pods: 1
Feb 24 00:40:12.414: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:13.428: INFO: Number of nodes with available pods: 1
Feb 24 00:40:13.428: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:15.146: INFO: Number of nodes with available pods: 1
Feb 24 00:40:15.146: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:15.665: INFO: Number of nodes with available pods: 1
Feb 24 00:40:15.665: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:16.428: INFO: Number of nodes with available pods: 1
Feb 24 00:40:16.428: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:17.416: INFO: Number of nodes with available pods: 1
Feb 24 00:40:17.416: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:18.439: INFO: Number of nodes with available pods: 1
Feb 24 00:40:18.440: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:19.645: INFO: Number of nodes with available pods: 1
Feb 24 00:40:19.645: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:20.433: INFO: Number of nodes with available pods: 1
Feb 24 00:40:20.434: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:21.415: INFO: Number of nodes with available pods: 1
Feb 24 00:40:21.415: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:40:22.413: INFO: Number of nodes with available pods: 2
Feb 24 00:40:22.413: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7643, will wait for the garbage collector to delete the pods
Feb 24 00:40:22.503: INFO: Deleting DaemonSet.extensions daemon-set took: 31.355883ms
Feb 24 00:40:22.904: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.284717ms
Feb 24 00:40:33.211: INFO: Number of nodes with available pods: 0
Feb 24 00:40:33.211: INFO: Number of running nodes: 0, number of available pods: 0
Feb 24 00:40:33.245: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7643/daemonsets","resourceVersion":"10331209"},"items":null}
Feb 24 00:40:33.250: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7643/pods","resourceVersion":"10331209"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:40:33.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7643" for this suite.
• [SLOW TEST:47.288 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":135,"skipped":2364,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:40:33.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 24 00:40:33.417: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:40:50.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8714" for this suite.
• [SLOW TEST:16.935 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":136,"skipped":2370,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:40:50.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-f221ca16-3722-4536-b9b9-542ea1e8aaec
STEP: Creating a pod to test consume secrets
Feb 24 00:40:50.379: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778" in namespace "projected-1375" to be "success or failure"
Feb 24 00:40:50.387: INFO: Pod "pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319172ms
Feb 24 00:40:52.393: INFO: Pod "pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013882434s
Feb 24 00:40:54.399: INFO: Pod "pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020479933s
Feb 24 00:40:56.406: INFO: Pod "pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027372392s
Feb 24 00:40:58.412: INFO: Pod "pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.033032311s
STEP: Saw pod success
Feb 24 00:40:58.412: INFO: Pod "pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778" satisfied condition "success or failure"
Feb 24 00:40:58.415: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778 container projected-secret-volume-test:
STEP: delete the pod
Feb 24 00:40:58.451: INFO: Waiting for pod pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778 to disappear
Feb 24 00:40:58.475: INFO: Pod pod-projected-secrets-a39a468c-2154-494e-940e-ead92395a778 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:40:58.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1375" for this suite.
• [SLOW TEST:8.278 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":137,"skipped":2377,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:40:58.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-d2b863fd-b525-43d3-aca5-b80d129faaf7
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-d2b863fd-b525-43d3-aca5-b80d129faaf7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:42:40.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5553" for this suite.
• [SLOW TEST:101.768 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":138,"skipped":2383,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:42:40.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Feb 24 00:42:40.341: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix737383010/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:42:40.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1716" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":280,"completed":139,"skipped":2405,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:42:40.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:42:56.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6007" for this suite.
• [SLOW TEST:16.436 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":140,"skipped":2422,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:42:56.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-ace80135-698a-4695-8c7a-46117ac5e536
STEP: Creating a pod to test consume secrets
Feb 24 00:42:57.115: INFO: Waiting up to 5m0s for pod "pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6" in namespace "secrets-3631" to be "success or failure"
Feb 24 00:42:57.124: INFO: Pod "pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.079948ms
Feb 24 00:42:59.319: INFO: Pod "pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203683888s
Feb 24 00:43:01.324: INFO: Pod "pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208683629s
Feb 24 00:43:03.330: INFO: Pod "pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214781276s
Feb 24 00:43:05.335: INFO: Pod "pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.220004901s
STEP: Saw pod success
Feb 24 00:43:05.335: INFO: Pod "pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6" satisfied condition "success or failure"
Feb 24 00:43:05.338: INFO: Trying to get logs from node jerma-node pod pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6 container secret-env-test:
STEP: delete the pod
Feb 24 00:43:05.389: INFO: Waiting for pod pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6 to disappear
Feb 24 00:43:05.410: INFO: Pod pod-secrets-defe97ff-f15d-431e-a721-48b68ed0afd6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:43:05.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3631" for this suite.
• [SLOW TEST:8.516 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2435,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:43:05.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Feb 24 00:43:05.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 24 00:43:05.792: INFO: stderr: ""
Feb 24 00:43:05.792: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:43:05.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1869" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":280,"completed":142,"skipped":2474,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:43:05.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-e5cfad7a-6bd9-460b-9c9e-f51f810b53fd
STEP: Creating secret with name secret-projected-all-test-volume-22ddc6e6-311d-4643-b353-d73e2f097dde
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 24 00:43:06.086: INFO: Waiting up to 5m0s for pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782" in namespace "projected-7109" to be "success or failure"
Feb 24 00:43:06.110: INFO: Pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782": Phase="Pending", Reason="", readiness=false. Elapsed: 23.328711ms
Feb 24 00:43:08.116: INFO: Pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029438669s
Feb 24 00:43:10.121: INFO: Pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034622676s
Feb 24 00:43:12.130: INFO: Pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043679486s
Feb 24 00:43:14.135: INFO: Pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048601185s
Feb 24 00:43:16.143: INFO: Pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056486886s
STEP: Saw pod success
Feb 24 00:43:16.143: INFO: Pod "projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782" satisfied condition "success or failure"
Feb 24 00:43:16.147: INFO: Trying to get logs from node jerma-node pod projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782 container projected-all-volume-test:
STEP: delete the pod
Feb 24 00:43:16.261: INFO: Waiting for pod projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782 to disappear
Feb 24 00:43:16.267: INFO: Pod projected-volume-a038d1eb-5a5d-46cd-a778-2cc856a5d782 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:43:16.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7109" for this suite.
• [SLOW TEST:10.465 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":143,"skipped":2509,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:43:16.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 00:43:16.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf" in namespace "projected-5805" to be "success or failure"
Feb 24 00:43:16.419: INFO: Pod "downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.251862ms
Feb 24 00:43:18.426: INFO: Pod "downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023354514s
Feb 24 00:43:20.431: INFO: Pod "downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028366895s
Feb 24 00:43:22.437: INFO: Pod "downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034421874s
Feb 24 00:43:24.442: INFO: Pod "downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039912771s
STEP: Saw pod success
Feb 24 00:43:24.442: INFO: Pod "downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf" satisfied condition "success or failure"
Feb 24 00:43:24.446: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf container client-container:
STEP: delete the pod
Feb 24 00:43:24.481: INFO: Waiting for pod downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf to disappear
Feb 24 00:43:24.487: INFO: Pod downwardapi-volume-175c5031-5807-4438-9fb9-0529e8a8cebf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:43:24.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5805" for this suite.
• [SLOW TEST:8.284 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":144,"skipped":2530,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:43:24.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-ecdd9bf3-dde6-4d6a-bbbf-350453e3eb16
STEP: Creating configMap with name cm-test-opt-upd-f121ba2b-64e7-4b37-bf22-f8e9c0958df4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ecdd9bf3-dde6-4d6a-bbbf-350453e3eb16
STEP: Updating configmap cm-test-opt-upd-f121ba2b-64e7-4b37-bf22-f8e9c0958df4
STEP: Creating configMap with name cm-test-opt-create-7e652de1-c0ab-4b51-8be7-34e63de6a5a8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:44:43.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1793" for this suite.
• [SLOW TEST:78.669 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":145,"skipped":2570,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:44:43.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 00:44:43.454: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 75.272104ms)
Feb 24 00:44:43.462: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 7.203554ms)
Feb 24 00:44:43.466: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.507485ms)
Feb 24 00:44:43.469: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.169867ms)
Feb 24 00:44:43.474: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.762065ms)
Feb 24 00:44:43.479: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.389039ms)
Feb 24 00:44:43.483: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.791434ms)
Feb 24 00:44:43.488: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.133133ms)
Feb 24 00:44:43.492: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.043936ms)
Feb 24 00:44:43.497: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.026498ms)
Feb 24 00:44:43.502: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.09605ms)
Feb 24 00:44:43.506: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.519787ms)
Feb 24 00:44:43.511: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.014984ms)
Feb 24 00:44:43.516: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.538746ms)
Feb 24 00:44:43.520: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.82421ms)
Feb 24 00:44:43.523: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.03408ms)
Feb 24 00:44:43.527: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.380364ms)
Feb 24 00:44:43.532: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.198925ms)
Feb 24 00:44:43.540: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 8.721649ms)
Feb 24 00:44:43.546: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.866095ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:44:43.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8895" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":146,"skipped":2575,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:44:43.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:44:43.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8114" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":147,"skipped":2603,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:44:43.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 00:44:43.894: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:44:45.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-905" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":280,"completed":148,"skipped":2618,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:44:45.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 24 00:44:45.447: INFO: Waiting up to 5m0s for pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505" in namespace "emptydir-7920" to be "success or failure"
Feb 24 00:44:45.541: INFO: Pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505": Phase="Pending", Reason="", readiness=false. Elapsed: 94.100317ms
Feb 24 00:44:47.551: INFO: Pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103615739s
Feb 24 00:44:49.609: INFO: Pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162063673s
Feb 24 00:44:51.618: INFO: Pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170977511s
Feb 24 00:44:53.629: INFO: Pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181722346s
Feb 24 00:44:55.636: INFO: Pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.189229365s
STEP: Saw pod success
Feb 24 00:44:55.636: INFO: Pod "pod-ede81cba-50dc-4aa2-8648-6679624c3505" satisfied condition "success or failure"
Feb 24 00:44:55.639: INFO: Trying to get logs from node jerma-node pod pod-ede81cba-50dc-4aa2-8648-6679624c3505 container test-container: 
STEP: delete the pod
Feb 24 00:44:55.671: INFO: Waiting for pod pod-ede81cba-50dc-4aa2-8648-6679624c3505 to disappear
Feb 24 00:44:55.685: INFO: Pod pod-ede81cba-50dc-4aa2-8648-6679624c3505 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:44:55.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7920" for this suite.

• [SLOW TEST:10.326 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":149,"skipped":2627,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:44:55.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b4cbee2a-a018-4812-9228-b33f40e7b554
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b4cbee2a-a018-4812-9228-b33f40e7b554
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:46:19.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7004" for this suite.

• [SLOW TEST:83.727 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":150,"skipped":2633,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:46:19.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Feb 24 00:46:19.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4278'
Feb 24 00:46:20.056: INFO: stderr: ""
Feb 24 00:46:20.056: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 00:46:20.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4278'
Feb 24 00:46:20.275: INFO: stderr: ""
Feb 24 00:46:20.275: INFO: stdout: "update-demo-nautilus-jsknx update-demo-nautilus-r598w "
Feb 24 00:46:20.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsknx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:20.361: INFO: stderr: ""
Feb 24 00:46:20.361: INFO: stdout: ""
Feb 24 00:46:20.361: INFO: update-demo-nautilus-jsknx is created but not running
Feb 24 00:46:25.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4278'
Feb 24 00:46:26.754: INFO: stderr: ""
Feb 24 00:46:26.755: INFO: stdout: "update-demo-nautilus-jsknx update-demo-nautilus-r598w "
Feb 24 00:46:26.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsknx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:27.142: INFO: stderr: ""
Feb 24 00:46:27.142: INFO: stdout: ""
Feb 24 00:46:27.142: INFO: update-demo-nautilus-jsknx is created but not running
Feb 24 00:46:32.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4278'
Feb 24 00:46:32.642: INFO: stderr: ""
Feb 24 00:46:32.642: INFO: stdout: "update-demo-nautilus-jsknx update-demo-nautilus-r598w "
Feb 24 00:46:32.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsknx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:32.809: INFO: stderr: ""
Feb 24 00:46:32.810: INFO: stdout: "true"
Feb 24 00:46:32.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsknx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:33.052: INFO: stderr: ""
Feb 24 00:46:33.052: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:46:33.052: INFO: validating pod update-demo-nautilus-jsknx
Feb 24 00:46:33.163: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:46:33.164: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:46:33.164: INFO: update-demo-nautilus-jsknx is verified up and running
Feb 24 00:46:33.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r598w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:33.379: INFO: stderr: ""
Feb 24 00:46:33.380: INFO: stdout: ""
Feb 24 00:46:33.380: INFO: update-demo-nautilus-r598w is created but not running
Feb 24 00:46:38.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4278'
Feb 24 00:46:38.522: INFO: stderr: ""
Feb 24 00:46:38.522: INFO: stdout: "update-demo-nautilus-jsknx update-demo-nautilus-r598w "
Feb 24 00:46:38.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsknx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:38.702: INFO: stderr: ""
Feb 24 00:46:38.703: INFO: stdout: "true"
Feb 24 00:46:38.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsknx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:38.820: INFO: stderr: ""
Feb 24 00:46:38.820: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:46:38.820: INFO: validating pod update-demo-nautilus-jsknx
Feb 24 00:46:38.835: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:46:38.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:46:38.835: INFO: update-demo-nautilus-jsknx is verified up and running
Feb 24 00:46:38.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r598w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:38.996: INFO: stderr: ""
Feb 24 00:46:38.996: INFO: stdout: "true"
Feb 24 00:46:38.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r598w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:46:39.077: INFO: stderr: ""
Feb 24 00:46:39.078: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:46:39.078: INFO: validating pod update-demo-nautilus-r598w
Feb 24 00:46:39.096: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:46:39.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:46:39.096: INFO: update-demo-nautilus-r598w is verified up and running
STEP: rolling-update to new replication controller
Feb 24 00:46:39.099: INFO: scanned /root for discovery docs: 
Feb 24 00:46:39.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4278'
Feb 24 00:47:14.283: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 24 00:47:14.283: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 00:47:14.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4278'
Feb 24 00:47:14.420: INFO: stderr: ""
Feb 24 00:47:14.420: INFO: stdout: "update-demo-kitten-kh9ch update-demo-kitten-zs46t "
Feb 24 00:47:14.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kh9ch -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:47:14.528: INFO: stderr: ""
Feb 24 00:47:14.528: INFO: stdout: "true"
Feb 24 00:47:14.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kh9ch -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:47:14.621: INFO: stderr: ""
Feb 24 00:47:14.621: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 24 00:47:14.621: INFO: validating pod update-demo-kitten-kh9ch
Feb 24 00:47:14.637: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 24 00:47:14.637: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 24 00:47:14.637: INFO: update-demo-kitten-kh9ch is verified up and running
Feb 24 00:47:14.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zs46t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:47:14.736: INFO: stderr: ""
Feb 24 00:47:14.737: INFO: stdout: "true"
Feb 24 00:47:14.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zs46t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4278'
Feb 24 00:47:14.832: INFO: stderr: ""
Feb 24 00:47:14.832: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 24 00:47:14.833: INFO: validating pod update-demo-kitten-zs46t
Feb 24 00:47:14.842: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 24 00:47:14.843: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 24 00:47:14.843: INFO: update-demo-kitten-zs46t is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:47:14.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4278" for this suite.

• [SLOW TEST:55.436 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":280,"completed":151,"skipped":2641,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
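Each "validating pod" step above fetches the JSON payload the update-demo container serves (e.g. {"image": "kitten.jpg"}) and compares the image field against the expected value, which is what the "Unmarshalled json jpg/img => ... , expecting ..." lines record. A hedged Go sketch of that check, with the payload hard-coded from the log rather than fetched from a pod:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageData matches the shape of the payload served by the
// update-demo containers, as seen in the log.
type imageData struct {
	Image string `json:"image"`
}

// validateImage unmarshals raw JSON and checks the image field.
func validateImage(raw []byte, want string) error {
	var d imageData
	if err := json.Unmarshal(raw, &d); err != nil {
		return err
	}
	if d.Image != want {
		return fmt.Errorf("got %q, expecting %q", d.Image, want)
	}
	return nil
}

func main() {
	raw := []byte(`{"image": "kitten.jpg"}`) // payload copied from the log
	if err := validateImage(raw, "kitten.jpg"); err != nil {
		fmt.Println("validation failed:", err)
		return
	}
	fmt.Println("pod is verified up and running")
}
```

Note the deprecation warning in the log: `kubectl rolling-update` (used by this test) was removed in later releases in favor of `kubectl rollout` on Deployments.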
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:47:14.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-7a23c389-ff40-495d-ae01-890b5fb9648d
STEP: Creating a pod to test consume configMaps
Feb 24 00:47:14.999: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025" in namespace "projected-4240" to be "success or failure"
Feb 24 00:47:15.012: INFO: Pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025": Phase="Pending", Reason="", readiness=false. Elapsed: 13.421001ms
Feb 24 00:47:17.019: INFO: Pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020004916s
Feb 24 00:47:19.027: INFO: Pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028431564s
Feb 24 00:47:21.469: INFO: Pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470103422s
Feb 24 00:47:24.375: INFO: Pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025": Phase="Pending", Reason="", readiness=false. Elapsed: 9.376238086s
Feb 24 00:47:26.381: INFO: Pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.382497473s
STEP: Saw pod success
Feb 24 00:47:26.382: INFO: Pod "pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025" satisfied condition "success or failure"
Feb 24 00:47:26.384: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 00:47:26.495: INFO: Waiting for pod pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025 to disappear
Feb 24 00:47:26.514: INFO: Pod pod-projected-configmaps-705beaed-1ef7-4aae-b0f9-6fdf2e6c9025 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:47:26.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4240" for this suite.

• [SLOW TEST:11.667 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":152,"skipped":2661,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:47:26.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 00:47:26.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762" in namespace "downward-api-7447" to be "success or failure"
Feb 24 00:47:26.716: INFO: Pod "downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762": Phase="Pending", Reason="", readiness=false. Elapsed: 12.188794ms
Feb 24 00:47:28.723: INFO: Pod "downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019985477s
Feb 24 00:47:30.734: INFO: Pod "downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030061888s
Feb 24 00:47:32.741: INFO: Pod "downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037881096s
Feb 24 00:47:34.746: INFO: Pod "downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042937419s
STEP: Saw pod success
Feb 24 00:47:34.747: INFO: Pod "downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762" satisfied condition "success or failure"
Feb 24 00:47:34.749: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762 container client-container: 
STEP: delete the pod
Feb 24 00:47:34.813: INFO: Waiting for pod downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762 to disappear
Feb 24 00:47:34.829: INFO: Pod downwardapi-volume-565ae66d-40e3-4ba0-be6f-bcab1062d762 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:47:34.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7447" for this suite.

• [SLOW TEST:8.326 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":153,"skipped":2671,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:47:34.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 24 00:47:35.094: INFO: Number of nodes with available pods: 0
Feb 24 00:47:35.094: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:36.865: INFO: Number of nodes with available pods: 0
Feb 24 00:47:36.865: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:37.402: INFO: Number of nodes with available pods: 0
Feb 24 00:47:37.402: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:38.519: INFO: Number of nodes with available pods: 0
Feb 24 00:47:38.520: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:39.108: INFO: Number of nodes with available pods: 0
Feb 24 00:47:39.108: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:40.183: INFO: Number of nodes with available pods: 0
Feb 24 00:47:40.184: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:42.763: INFO: Number of nodes with available pods: 0
Feb 24 00:47:42.763: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:43.291: INFO: Number of nodes with available pods: 0
Feb 24 00:47:43.291: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:44.105: INFO: Number of nodes with available pods: 0
Feb 24 00:47:44.105: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:47:45.142: INFO: Number of nodes with available pods: 1
Feb 24 00:47:45.142: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:47:46.106: INFO: Number of nodes with available pods: 1
Feb 24 00:47:46.106: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:47:47.105: INFO: Number of nodes with available pods: 2
Feb 24 00:47:47.105: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 24 00:47:47.152: INFO: Number of nodes with available pods: 2
Feb 24 00:47:47.152: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5066, will wait for the garbage collector to delete the pods
Feb 24 00:47:48.239: INFO: Deleting DaemonSet.extensions daemon-set took: 11.265719ms
Feb 24 00:47:48.740: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.660756ms
Feb 24 00:47:55.446: INFO: Number of nodes with available pods: 0
Feb 24 00:47:55.446: INFO: Number of running nodes: 0, number of available pods: 0
Feb 24 00:47:55.473: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5066/daemonsets","resourceVersion":"10332823"},"items":null}

Feb 24 00:47:55.478: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5066/pods","resourceVersion":"10332823"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:47:55.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5066" for this suite.

• [SLOW TEST:20.645 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":154,"skipped":2697,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:47:55.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 24 00:47:55.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5743'
Feb 24 00:47:55.943: INFO: stderr: ""
Feb 24 00:47:55.943: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 00:47:55.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:47:56.143: INFO: stderr: ""
Feb 24 00:47:56.143: INFO: stdout: "update-demo-nautilus-7mcpb update-demo-nautilus-gb5tf "
Feb 24 00:47:56.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mcpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:47:56.318: INFO: stderr: ""
Feb 24 00:47:56.318: INFO: stdout: ""
Feb 24 00:47:56.319: INFO: update-demo-nautilus-7mcpb is created but not running
Feb 24 00:48:01.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:01.508: INFO: stderr: ""
Feb 24 00:48:01.508: INFO: stdout: "update-demo-nautilus-7mcpb update-demo-nautilus-gb5tf "
Feb 24 00:48:01.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mcpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:02.515: INFO: stderr: ""
Feb 24 00:48:02.515: INFO: stdout: ""
Feb 24 00:48:02.515: INFO: update-demo-nautilus-7mcpb is created but not running
Feb 24 00:48:07.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:07.689: INFO: stderr: ""
Feb 24 00:48:07.689: INFO: stdout: "update-demo-nautilus-7mcpb update-demo-nautilus-gb5tf "
Feb 24 00:48:07.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mcpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:07.836: INFO: stderr: ""
Feb 24 00:48:07.836: INFO: stdout: "true"
Feb 24 00:48:07.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mcpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:07.958: INFO: stderr: ""
Feb 24 00:48:07.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:48:07.958: INFO: validating pod update-demo-nautilus-7mcpb
Feb 24 00:48:07.964: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:48:07.964: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:48:07.964: INFO: update-demo-nautilus-7mcpb is verified up and running
Feb 24 00:48:07.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:08.054: INFO: stderr: ""
Feb 24 00:48:08.054: INFO: stdout: "true"
Feb 24 00:48:08.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:08.153: INFO: stderr: ""
Feb 24 00:48:08.153: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:48:08.153: INFO: validating pod update-demo-nautilus-gb5tf
Feb 24 00:48:08.161: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:48:08.161: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:48:08.161: INFO: update-demo-nautilus-gb5tf is verified up and running
STEP: scaling down the replication controller
Feb 24 00:48:08.163: INFO: scanned /root for discovery docs: 
Feb 24 00:48:08.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5743'
Feb 24 00:48:09.383: INFO: stderr: ""
Feb 24 00:48:09.383: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 00:48:09.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:09.540: INFO: stderr: ""
Feb 24 00:48:09.540: INFO: stdout: "update-demo-nautilus-7mcpb update-demo-nautilus-gb5tf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 24 00:48:14.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:14.657: INFO: stderr: ""
Feb 24 00:48:14.657: INFO: stdout: "update-demo-nautilus-7mcpb update-demo-nautilus-gb5tf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 24 00:48:19.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:19.872: INFO: stderr: ""
Feb 24 00:48:19.872: INFO: stdout: "update-demo-nautilus-7mcpb update-demo-nautilus-gb5tf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 24 00:48:24.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:25.004: INFO: stderr: ""
Feb 24 00:48:25.004: INFO: stdout: "update-demo-nautilus-gb5tf "
Feb 24 00:48:25.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:25.146: INFO: stderr: ""
Feb 24 00:48:25.146: INFO: stdout: "true"
Feb 24 00:48:25.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:25.265: INFO: stderr: ""
Feb 24 00:48:25.266: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:48:25.266: INFO: validating pod update-demo-nautilus-gb5tf
Feb 24 00:48:25.271: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:48:25.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:48:25.271: INFO: update-demo-nautilus-gb5tf is verified up and running
STEP: scaling up the replication controller
Feb 24 00:48:25.273: INFO: scanned /root for discovery docs: 
Feb 24 00:48:25.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5743'
Feb 24 00:48:26.600: INFO: stderr: ""
Feb 24 00:48:26.600: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 24 00:48:26.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:27.185: INFO: stderr: ""
Feb 24 00:48:27.185: INFO: stdout: "update-demo-nautilus-gb5tf update-demo-nautilus-lsqlp "
Feb 24 00:48:27.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:27.613: INFO: stderr: ""
Feb 24 00:48:27.613: INFO: stdout: "true"
Feb 24 00:48:27.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:27.732: INFO: stderr: ""
Feb 24 00:48:27.732: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:48:27.732: INFO: validating pod update-demo-nautilus-gb5tf
Feb 24 00:48:27.738: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:48:27.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:48:27.738: INFO: update-demo-nautilus-gb5tf is verified up and running
Feb 24 00:48:27.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsqlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:27.891: INFO: stderr: ""
Feb 24 00:48:27.891: INFO: stdout: ""
Feb 24 00:48:27.892: INFO: update-demo-nautilus-lsqlp is created but not running
Feb 24 00:48:32.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5743'
Feb 24 00:48:33.037: INFO: stderr: ""
Feb 24 00:48:33.038: INFO: stdout: "update-demo-nautilus-gb5tf update-demo-nautilus-lsqlp "
Feb 24 00:48:33.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:33.184: INFO: stderr: ""
Feb 24 00:48:33.184: INFO: stdout: "true"
Feb 24 00:48:33.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb5tf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:33.312: INFO: stderr: ""
Feb 24 00:48:33.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:48:33.312: INFO: validating pod update-demo-nautilus-gb5tf
Feb 24 00:48:33.317: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:48:33.317: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:48:33.317: INFO: update-demo-nautilus-gb5tf is verified up and running
Feb 24 00:48:33.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsqlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:33.525: INFO: stderr: ""
Feb 24 00:48:33.525: INFO: stdout: "true"
Feb 24 00:48:33.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsqlp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5743'
Feb 24 00:48:33.641: INFO: stderr: ""
Feb 24 00:48:33.641: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 24 00:48:33.641: INFO: validating pod update-demo-nautilus-lsqlp
Feb 24 00:48:33.648: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 24 00:48:33.648: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 24 00:48:33.648: INFO: update-demo-nautilus-lsqlp is verified up and running
STEP: using delete to clean up resources
Feb 24 00:48:33.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5743'
Feb 24 00:48:33.759: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 24 00:48:33.760: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 24 00:48:33.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5743'
Feb 24 00:48:33.898: INFO: stderr: "No resources found in kubectl-5743 namespace.\n"
Feb 24 00:48:33.898: INFO: stdout: ""
Feb 24 00:48:33.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5743 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 24 00:48:33.997: INFO: stderr: ""
Feb 24 00:48:33.997: INFO: stdout: "update-demo-nautilus-gb5tf\nupdate-demo-nautilus-lsqlp\n"
Feb 24 00:48:34.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5743'
Feb 24 00:48:35.216: INFO: stderr: "No resources found in kubectl-5743 namespace.\n"
Feb 24 00:48:35.217: INFO: stdout: ""
Feb 24 00:48:35.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5743 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 24 00:48:35.706: INFO: stderr: ""
Feb 24 00:48:35.706: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:48:35.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5743" for this suite.

• [SLOW TEST:40.383 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":155,"skipped":2699,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:48:35.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 00:48:36.815: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 00:48:39.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:48:41.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:48:43.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102116, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 00:48:46.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:48:46.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1812" for this suite.
STEP: Destroying namespace "webhook-1812-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.263 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":156,"skipped":2702,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:48:47.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-299dd302-45d3-4968-b3f9-a00fb2bd1e1c in namespace container-probe-4396
Feb 24 00:48:57.469: INFO: Started pod liveness-299dd302-45d3-4968-b3f9-a00fb2bd1e1c in namespace container-probe-4396
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 00:48:57.475: INFO: Initial restart count of pod liveness-299dd302-45d3-4968-b3f9-a00fb2bd1e1c is 0
Feb 24 00:49:23.588: INFO: Restart count of pod container-probe-4396/liveness-299dd302-45d3-4968-b3f9-a00fb2bd1e1c is now 1 (26.112975906s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:49:23.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4396" for this suite.

• [SLOW TEST:36.498 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":157,"skipped":2711,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:49:23.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-7392db5a-8c45-4f88-9977-46a80b09c729
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:49:23.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-757" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":158,"skipped":2722,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:49:23.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 24 00:49:23.908: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 00:49:23.964: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 00:49:23.967: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 24 00:49:23.997: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 24 00:49:23.997: INFO: 	Container weave ready: true, restart count 1
Feb 24 00:49:23.997: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 00:49:23.997: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:23.997: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 00:49:23.997: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 24 00:49:24.019: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container kube-controller-manager ready: true, restart count 17
Feb 24 00:49:24.019: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 00:49:24.019: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container weave ready: true, restart count 0
Feb 24 00:49:24.019: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 00:49:24.019: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container kube-scheduler ready: true, restart count 23
Feb 24 00:49:24.019: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 24 00:49:24.019: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container etcd ready: true, restart count 1
Feb 24 00:49:24.019: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container coredns ready: true, restart count 0
Feb 24 00:49:24.019: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 00:49:24.019: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b2d2aa35-7768-4825-85a9-bed2363e7e5b 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-b2d2aa35-7768-4825-85a9-bed2363e7e5b off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b2d2aa35-7768-4825-85a9-bed2363e7e5b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:49:58.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4370" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:34.775 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":159,"skipped":2738,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
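The test above schedules three pods onto the same node with the same hostPort 54321 but differing hostIP or protocol. A minimal Python sketch of the scheduler's hostPort predicate (not Kubernetes source; field names mirror the container-port API fields):

```python
# Hedged sketch: two host-port claims conflict only when hostPort, protocol,
# AND hostIP all clash; "0.0.0.0" acts as a wildcard overlapping any address.

def ips_overlap(ip_a: str, ip_b: str) -> bool:
    """0.0.0.0 binds all interfaces, so it overlaps every hostIP."""
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

def host_ports_conflict(a: dict, b: dict) -> bool:
    """Each dict holds hostPort/protocol/hostIP for one container port."""
    return (
        a["hostPort"] == b["hostPort"]
        and a["protocol"] == b["protocol"]
        and ips_overlap(a["hostIP"], b["hostIP"])
    )

# The three pods from this test: same port 54321, yet no pair conflicts.
pod1 = {"hostPort": 54321, "protocol": "TCP", "hostIP": "127.0.0.1"}
pod2 = {"hostPort": 54321, "protocol": "TCP", "hostIP": "127.0.0.2"}
pod3 = {"hostPort": 54321, "protocol": "UDP", "hostIP": "127.0.0.2"}

assert not host_ports_conflict(pod1, pod2)  # different hostIP
assert not host_ports_conflict(pod2, pod3)  # different protocol
```

This is why all three pods are expected to schedule onto jerma-node.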
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:49:58.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 24 00:49:58.795: INFO: Waiting up to 5m0s for pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791" in namespace "emptydir-5935" to be "success or failure"
Feb 24 00:49:58.807: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791": Phase="Pending", Reason="", readiness=false. Elapsed: 11.173722ms
Feb 24 00:50:00.813: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017313912s
Feb 24 00:50:02.822: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02608109s
Feb 24 00:50:04.826: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030185883s
Feb 24 00:50:06.882: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08633954s
Feb 24 00:50:08.887: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091269507s
Feb 24 00:50:10.892: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.096767043s
STEP: Saw pod success
Feb 24 00:50:10.892: INFO: Pod "pod-a27dd39a-4219-427e-b3d1-652b5cb5e791" satisfied condition "success or failure"
Feb 24 00:50:10.897: INFO: Trying to get logs from node jerma-node pod pod-a27dd39a-4219-427e-b3d1-652b5cb5e791 container test-container: 
STEP: delete the pod
Feb 24 00:50:10.964: INFO: Waiting for pod pod-a27dd39a-4219-427e-b3d1-652b5cb5e791 to disappear
Feb 24 00:50:10.991: INFO: Pod pod-a27dd39a-4219-427e-b3d1-652b5cb5e791 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:50:10.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5935" for this suite.

• [SLOW TEST:12.396 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2750,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
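The emptyDir test pod writes a file onto the tmpfs-backed mount and verifies its permission bits. A hedged local sketch of that check, using a temp directory as a stand-in for the real emptyDir mount path:

```python
import os
import stat
import tempfile

# Hedged sketch: a file on the (non-root, 0666, tmpfs) emptyDir volume must
# carry mode 0666. A local temp dir stands in for the pod's volume mount.
path = os.path.join(tempfile.mkdtemp(), "test-file")
with open(path, "w") as f:
    f.write("mount-tester content")
os.chmod(path, 0o666)  # explicit chmod, so the process umask is irrelevant

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o666, oct(mode)
```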
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:50:11.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 00:50:11.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:50:19.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1056" for this suite.

• [SLOW TEST:8.215 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":161,"skipped":2758,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
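Retrieving logs over websockets upgrades the standard core/v1 pod log subresource. A sketch of the request path the test hits, using the namespace and pod name from this run (the query parameters shown are assumptions about this particular invocation):

```python
from urllib.parse import urlencode

# Hedged sketch: build the core/v1 pod log subresource path, which the e2e
# client then upgrades to a websocket to stream log output.
def pod_log_path(namespace: str, pod: str, **params) -> str:
    base = f"/api/v1/namespaces/{namespace}/pods/{pod}/log"
    return f"{base}?{urlencode(params)}" if params else base

path = pod_log_path(
    "pods-1056",
    "pod-logs-websocket-d6917c3f-8e7f-4ae6-9e87-7a56a8b79a1b",
    container="main",   # container name as logged for this pod
    follow="true",
)
assert path.startswith("/api/v1/namespaces/pods-1056/pods/")
```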
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:50:19.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Feb 24 00:50:19.367: INFO: Waiting up to 5m0s for pod "client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce" in namespace "containers-2863" to be "success or failure"
Feb 24 00:50:19.383: INFO: Pod "client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce": Phase="Pending", Reason="", readiness=false. Elapsed: 16.533473ms
Feb 24 00:50:21.390: INFO: Pod "client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023586298s
Feb 24 00:50:23.958: INFO: Pod "client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591637746s
Feb 24 00:50:26.178: INFO: Pod "client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.811020324s
Feb 24 00:50:28.185: INFO: Pod "client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.818043523s
STEP: Saw pod success
Feb 24 00:50:28.185: INFO: Pod "client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce" satisfied condition "success or failure"
Feb 24 00:50:28.191: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce container test-container: 
STEP: delete the pod
Feb 24 00:50:28.264: INFO: Waiting for pod client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce to disappear
Feb 24 00:50:28.287: INFO: Pod client-containers-2638a187-50ae-45db-aee0-d67cf1c015ce no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:50:28.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2863" for this suite.

• [SLOW TEST:9.122 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":162,"skipped":2811,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
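The "override arguments" test relies on the rule that a container's `args` replaces the image's CMD while `command` (absent here) would replace the ENTRYPOINT. An illustrative container fragment, not the actual e2e fixture (the image and argument values below are hypothetical):

```python
# Hedged sketch: setting only `args` overrides the image's default CMD while
# leaving its ENTRYPOINT in place.
container = {
    "name": "test-container",
    "image": "example/busybox",  # hypothetical image name
    "args": ["/bin/sh", "-c", "echo override arguments"],
}

assert "command" not in container        # ENTRYPOINT untouched
assert container["args"][0] == "/bin/sh"  # image CMD replaced
```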
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:50:28.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 00:50:29.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8" in namespace "projected-4724" to be "success or failure"
Feb 24 00:50:29.404: INFO: Pod "downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 279.562071ms
Feb 24 00:50:31.413: INFO: Pod "downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288792592s
Feb 24 00:50:33.549: INFO: Pod "downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424787881s
Feb 24 00:50:35.557: INFO: Pod "downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433215543s
Feb 24 00:50:37.564: INFO: Pod "downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.439388727s
STEP: Saw pod success
Feb 24 00:50:37.564: INFO: Pod "downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8" satisfied condition "success or failure"
Feb 24 00:50:37.566: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8 container client-container: 
STEP: delete the pod
Feb 24 00:50:37.601: INFO: Waiting for pod downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8 to disappear
Feb 24 00:50:37.678: INFO: Pod downwardapi-volume-21768d29-ec18-4591-840b-c252c473a7a8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:50:37.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4724" for this suite.

• [SLOW TEST:9.341 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":163,"skipped":2838,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
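The projected downward API test exposes the container's CPU request through a `resourceFieldRef`, and the kubelet renders the value in units of the declared divisor. A hedged helper mimicking that conversion (the rounding-up behavior for whole-core divisors is my understanding of quantity conversion, not quoted from the test):

```python
# Hedged sketch: render a CPU request the way a downward API volume file
# would, for the two common divisors.
def render_cpu_request(request_millicores: int, divisor: str) -> str:
    if divisor == "1m":
        return str(request_millicores)
    if divisor == "1":  # whole cores; quantities round up, never down
        return str(-(-request_millicores // 1000))
    raise ValueError(f"unsupported divisor {divisor}")

assert render_cpu_request(250, "1m") == "250"
assert render_cpu_request(250, "1") == "1"   # 250m rounds up to one core
```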
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:50:37.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 24 00:50:37.721: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 00:50:37.752: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 00:50:37.755: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 24 00:50:37.761: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 24 00:50:37.761: INFO: 	Container weave ready: true, restart count 1
Feb 24 00:50:37.761: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 00:50:37.761: INFO: pod-logs-websocket-d6917c3f-8e7f-4ae6-9e87-7a56a8b79a1b from pods-1056 started at 2020-02-24 00:50:11 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.761: INFO: 	Container main ready: true, restart count 0
Feb 24 00:50:37.761: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.761: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 00:50:37.761: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 24 00:50:37.767: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container kube-scheduler ready: true, restart count 23
Feb 24 00:50:37.767: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 24 00:50:37.767: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container etcd ready: true, restart count 1
Feb 24 00:50:37.767: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container coredns ready: true, restart count 0
Feb 24 00:50:37.767: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container coredns ready: true, restart count 0
Feb 24 00:50:37.767: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container kube-controller-manager ready: true, restart count 17
Feb 24 00:50:37.767: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 00:50:37.767: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 24 00:50:37.767: INFO: 	Container weave ready: true, restart count 0
Feb 24 00:50:37.767: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e4930206-bc5c-4775-a148-cb9fd4b1bfee 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-e4930206-bc5c-4775-a148-cb9fd4b1bfee off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e4930206-bc5c-4775-a148-cb9fd4b1bfee
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:56:00.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4543" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:322.974 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":164,"skipped":2839,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
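This test is the complement of the earlier hostPort test: because pod4 binds 54322 on 0.0.0.0 (all interfaces), pod5's 127.0.0.1 binding for the same port and protocol must fail to schedule on that node. The two port declarations, as illustrative dicts:

```python
# Hedged sketch of the container-port fields in this test: a 0.0.0.0 hostIP
# overlaps every concrete address, so same port + protocol means conflict.
pod4_port = {"hostPort": 54322, "protocol": "TCP", "hostIP": "0.0.0.0"}
pod5_port = {"hostPort": 54322, "protocol": "TCP", "hostIP": "127.0.0.1"}

wildcard = pod4_port["hostIP"] == "0.0.0.0"
same_port_and_proto = (
    (pod4_port["hostPort"], pod4_port["protocol"])
    == (pod5_port["hostPort"], pod5_port["protocol"])
)
assert wildcard and same_port_and_proto  # hence pod5 is expected unschedulable
```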
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:56:00.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 24 00:56:00.802: INFO: Waiting up to 5m0s for pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298" in namespace "downward-api-14" to be "success or failure"
Feb 24 00:56:00.807: INFO: Pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298": Phase="Pending", Reason="", readiness=false. Elapsed: 4.80889ms
Feb 24 00:56:02.815: INFO: Pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012431789s
Feb 24 00:56:04.822: INFO: Pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019856496s
Feb 24 00:56:06.839: INFO: Pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036698259s
Feb 24 00:56:08.845: INFO: Pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042548096s
Feb 24 00:56:10.856: INFO: Pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053647279s
STEP: Saw pod success
Feb 24 00:56:10.856: INFO: Pod "downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298" satisfied condition "success or failure"
Feb 24 00:56:10.919: INFO: Trying to get logs from node jerma-node pod downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298 container dapi-container: 
STEP: delete the pod
Feb 24 00:56:10.997: INFO: Waiting for pod downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298 to disappear
Feb 24 00:56:11.012: INFO: Pod downward-api-726d0b41-b28e-4f0a-bbea-eb68b4f62298 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:56:11.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-14" for this suite.

• [SLOW TEST:10.372 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":165,"skipped":2856,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
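The downward API injects the pod's UID into the container environment via a `fieldRef`. An illustrative env entry, not the exact e2e fixture (the variable name is a hypothetical stand-in):

```python
# Hedged sketch: downward API env var backed by fieldRef metadata.uid.
env_entry = {
    "name": "POD_UID",  # hypothetical variable name
    "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
}

assert env_entry["valueFrom"]["fieldRef"]["fieldPath"] == "metadata.uid"
```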
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:56:11.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-9454/secret-test-30e297f9-1097-43d7-b383-03cf20818d03
STEP: Creating a pod to test consume secrets
Feb 24 00:56:11.223: INFO: Waiting up to 5m0s for pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671" in namespace "secrets-9454" to be "success or failure"
Feb 24 00:56:11.253: INFO: Pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671": Phase="Pending", Reason="", readiness=false. Elapsed: 30.018561ms
Feb 24 00:56:13.259: INFO: Pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035775856s
Feb 24 00:56:15.326: INFO: Pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103037358s
Feb 24 00:56:17.337: INFO: Pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113800495s
Feb 24 00:56:19.804: INFO: Pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671": Phase="Pending", Reason="", readiness=false. Elapsed: 8.58093453s
Feb 24 00:56:21.813: INFO: Pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.58980531s
STEP: Saw pod success
Feb 24 00:56:21.813: INFO: Pod "pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671" satisfied condition "success or failure"
Feb 24 00:56:21.820: INFO: Trying to get logs from node jerma-node pod pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671 container env-test: 
STEP: delete the pod
Feb 24 00:56:21.950: INFO: Waiting for pod pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671 to disappear
Feb 24 00:56:21.957: INFO: Pod pod-configmaps-529f048b-1fd0-4a0d-b292-67f529733671 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:56:21.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9454" for this suite.

• [SLOW TEST:10.940 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":166,"skipped":2866,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
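Consuming a secret "via the environment" means an env var backed by a `secretKeyRef`. The secret name below is the one created in this run's log; the key and variable name are hypothetical stand-ins:

```python
# Hedged sketch: env var sourced from a secret key via secretKeyRef.
env_entry = {
    "name": "SECRET_DATA",  # hypothetical variable name
    "valueFrom": {
        "secretKeyRef": {
            "name": "secret-test-30e297f9-1097-43d7-b383-03cf20818d03",
            "key": "data-1",  # hypothetical key
        }
    },
}

assert env_entry["valueFrom"]["secretKeyRef"]["name"].startswith("secret-test-")
```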
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:56:21.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 00:56:22.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa" in namespace "downward-api-2706" to be "success or failure"
Feb 24 00:56:22.306: INFO: Pod "downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.237165ms
Feb 24 00:56:24.313: INFO: Pod "downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01733683s
Feb 24 00:56:26.324: INFO: Pod "downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027786183s
Feb 24 00:56:28.380: INFO: Pod "downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084280585s
Feb 24 00:56:30.387: INFO: Pod "downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091178581s
STEP: Saw pod success
Feb 24 00:56:30.387: INFO: Pod "downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa" satisfied condition "success or failure"
Feb 24 00:56:30.390: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa container client-container: 
STEP: delete the pod
Feb 24 00:56:30.417: INFO: Waiting for pod downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa to disappear
Feb 24 00:56:30.420: INFO: Pod downwardapi-volume-7112eb13-b1a8-461a-b732-50cf3451c7aa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:56:30.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2706" for this suite.

• [SLOW TEST:8.505 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":167,"skipped":2872,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
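A detail worth knowing when reading `defaultMode` tests: the API serializes volume modes as decimal integers, so the familiar octal 0644 appears as 420 in a manifest. A small sketch of the mapping:

```python
# Hedged sketch: defaultMode travels through the API as a decimal integer;
# 420 decimal is exactly octal 0644, 256 decimal is octal 0400.
assert 0o644 == 420
assert oct(420) == "0o644"

def default_mode_octal(decimal_mode: int) -> str:
    """Render an API defaultMode value the way ls/stat would show it."""
    return format(decimal_mode, "04o")

assert default_mode_octal(420) == "0644"
assert default_mode_octal(256) == "0400"
```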
SSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:56:30.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:56:30.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-681" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":168,"skipped":2880,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:56:30.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 00:56:30.709: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 24 00:56:30.784: INFO: Number of nodes with available pods: 0
Feb 24 00:56:30.784: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:56:32.868: INFO: Number of nodes with available pods: 0
Feb 24 00:56:32.869: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:56:33.801: INFO: Number of nodes with available pods: 0
Feb 24 00:56:33.801: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:56:34.799: INFO: Number of nodes with available pods: 0
Feb 24 00:56:34.799: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:56:35.855: INFO: Number of nodes with available pods: 0
Feb 24 00:56:35.856: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:56:39.205: INFO: Number of nodes with available pods: 0
Feb 24 00:56:39.205: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:56:39.801: INFO: Number of nodes with available pods: 0
Feb 24 00:56:39.801: INFO: Node jerma-node is running more than one daemon pod
Feb 24 00:56:40.794: INFO: Number of nodes with available pods: 1
Feb 24 00:56:40.794: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:56:41.808: INFO: Number of nodes with available pods: 1
Feb 24 00:56:41.808: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:56:42.799: INFO: Number of nodes with available pods: 2
Feb 24 00:56:42.799: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 24 00:56:42.838: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:42.838: INFO: Wrong image for pod: daemon-set-hdbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:44.701: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:44.702: INFO: Wrong image for pod: daemon-set-hdbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:45.710: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:45.711: INFO: Wrong image for pod: daemon-set-hdbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:46.708: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:46.709: INFO: Wrong image for pod: daemon-set-hdbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:47.705: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:47.705: INFO: Wrong image for pod: daemon-set-hdbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:48.706: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:48.706: INFO: Wrong image for pod: daemon-set-hdbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:48.706: INFO: Pod daemon-set-hdbsq is not available
Feb 24 00:56:49.706: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:49.706: INFO: Pod daemon-set-thgc6 is not available
Feb 24 00:56:50.707: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:50.707: INFO: Pod daemon-set-thgc6 is not available
Feb 24 00:56:51.706: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:51.706: INFO: Pod daemon-set-thgc6 is not available
Feb 24 00:56:52.787: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:52.787: INFO: Pod daemon-set-thgc6 is not available
Feb 24 00:56:53.705: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:53.705: INFO: Pod daemon-set-thgc6 is not available
Feb 24 00:56:54.764: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:54.764: INFO: Pod daemon-set-thgc6 is not available
Feb 24 00:56:55.984: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:56.705: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:57.707: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:58.704: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:59.704: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:56:59.704: INFO: Pod daemon-set-5blv9 is not available
Feb 24 00:57:00.704: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:57:00.704: INFO: Pod daemon-set-5blv9 is not available
Feb 24 00:57:01.706: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:57:01.706: INFO: Pod daemon-set-5blv9 is not available
Feb 24 00:57:02.708: INFO: Wrong image for pod: daemon-set-5blv9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 24 00:57:02.708: INFO: Pod daemon-set-5blv9 is not available
Feb 24 00:57:03.708: INFO: Pod daemon-set-rvdsw is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 24 00:57:03.727: INFO: Number of nodes with available pods: 1
Feb 24 00:57:03.727: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:57:04.793: INFO: Number of nodes with available pods: 1
Feb 24 00:57:04.793: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:57:05.757: INFO: Number of nodes with available pods: 1
Feb 24 00:57:05.757: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:57:07.748: INFO: Number of nodes with available pods: 1
Feb 24 00:57:07.748: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:57:08.754: INFO: Number of nodes with available pods: 1
Feb 24 00:57:08.754: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:57:09.747: INFO: Number of nodes with available pods: 1
Feb 24 00:57:09.748: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 24 00:57:10.749: INFO: Number of nodes with available pods: 2
Feb 24 00:57:10.749: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3325, will wait for the garbage collector to delete the pods
Feb 24 00:57:10.854: INFO: Deleting DaemonSet.extensions daemon-set took: 13.134579ms
Feb 24 00:57:11.255: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.129696ms
Feb 24 00:57:21.391: INFO: Number of nodes with available pods: 0
Feb 24 00:57:21.391: INFO: Number of running nodes: 0, number of available pods: 0
Feb 24 00:57:21.396: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3325/daemonsets","resourceVersion":"10334707"},"items":null}

Feb 24 00:57:21.400: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3325/pods","resourceVersion":"10334707"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:57:21.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3325" for this suite.

• [SLOW TEST:50.844 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":169,"skipped":2880,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:57:21.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 00:57:22.294: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 00:57:24.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:57:26.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:57:28.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 00:57:30.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102642, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 00:57:33.377: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 00:57:43.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9869" for this suite.
STEP: Destroying namespace "webhook-9869-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:22.328 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":170,"skipped":2907,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 00:57:43.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-72b750a0-aaad-4430-83a2-69b10635dbee in namespace container-probe-7684
Feb 24 00:57:53.893: INFO: Started pod liveness-72b750a0-aaad-4430-83a2-69b10635dbee in namespace container-probe-7684
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 00:57:53.897: INFO: Initial restart count of pod liveness-72b750a0-aaad-4430-83a2-69b10635dbee is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:01:55.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7684" for this suite.

• [SLOW TEST:252.043 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2948,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:01:55.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Feb 24 01:01:55.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 24 01:01:59.549: INFO: stderr: ""
Feb 24 01:01:59.549: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:01:59.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3722" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":280,"completed":172,"skipped":2983,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:01:59.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 24 01:02:10.487: INFO: Successfully updated pod "annotationupdatede41964e-3ac5-42ea-9330-9a30a62b147c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:02:12.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3465" for this suite.

• [SLOW TEST:13.029 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":173,"skipped":2989,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:02:12.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:02:13.277: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:02:15.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:02:17.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:02:19.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:02:21.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718102933, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:02:24.337: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
Feb 24 01:02:27.453: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:02:27.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6913" for this suite.
STEP: Destroying namespace "webhook-6913-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:15.288 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":174,"skipped":2993,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:02:27.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 24 01:02:27.963: INFO: Waiting up to 5m0s for pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9" in namespace "downward-api-7933" to be "success or failure"
Feb 24 01:02:27.981: INFO: Pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.42735ms
Feb 24 01:02:29.987: INFO: Pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02336469s
Feb 24 01:02:31.994: INFO: Pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030404524s
Feb 24 01:02:34.075: INFO: Pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111800514s
Feb 24 01:02:36.086: INFO: Pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122751056s
Feb 24 01:02:38.094: INFO: Pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131061374s
STEP: Saw pod success
Feb 24 01:02:38.095: INFO: Pod "downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9" satisfied condition "success or failure"
Feb 24 01:02:38.099: INFO: Trying to get logs from node jerma-node pod downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9 container dapi-container: 
STEP: delete the pod
Feb 24 01:02:38.145: INFO: Waiting for pod downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9 to disappear
Feb 24 01:02:38.246: INFO: Pod downward-api-fbbd4280-861b-438a-b8cb-999d27bb28d9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:02:38.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7933" for this suite.

• [SLOW TEST:10.404 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":175,"skipped":3007,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:02:38.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 01:02:38.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29" in namespace "projected-5301" to be "success or failure"
Feb 24 01:02:38.579: INFO: Pod "downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29": Phase="Pending", Reason="", readiness=false. Elapsed: 61.613466ms
Feb 24 01:02:40.595: INFO: Pod "downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077628297s
Feb 24 01:02:42.608: INFO: Pod "downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090101421s
Feb 24 01:02:44.619: INFO: Pod "downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101275843s
Feb 24 01:02:46.627: INFO: Pod "downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108893113s
STEP: Saw pod success
Feb 24 01:02:46.627: INFO: Pod "downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29" satisfied condition "success or failure"
Feb 24 01:02:46.630: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29 container client-container: 
STEP: delete the pod
Feb 24 01:02:47.157: INFO: Waiting for pod downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29 to disappear
Feb 24 01:02:47.264: INFO: Pod downwardapi-volume-11dad55d-e9fe-4f3e-9075-67bab85ffe29 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:02:47.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5301" for this suite.

• [SLOW TEST:8.998 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":176,"skipped":3026,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:02:47.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 24 01:02:47.427: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:02:59.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8838" for this suite.

• [SLOW TEST:12.339 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":177,"skipped":3034,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:02:59.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:03:10.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2650" for this suite.

• [SLOW TEST:11.275 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":178,"skipped":3062,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:03:10.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7133
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7133
STEP: creating replication controller externalsvc in namespace services-7133
I0224 01:03:11.169710      10 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7133, replica count: 2
I0224 01:03:14.220562      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:03:17.221117      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:03:20.221754      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:03:23.222419      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:03:26.223210      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb 24 01:03:26.347: INFO: Creating new exec pod
Feb 24 01:03:36.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7133 execpod8hjgg -- /bin/sh -x -c nslookup nodeport-service'
Feb 24 01:03:36.903: INFO: stderr: "I0224 01:03:36.637663    3791 log.go:172] (0xc000976c60) (0xc000b583c0) Create stream\nI0224 01:03:36.637979    3791 log.go:172] (0xc000976c60) (0xc000b583c0) Stream added, broadcasting: 1\nI0224 01:03:36.662837    3791 log.go:172] (0xc000976c60) Reply frame received for 1\nI0224 01:03:36.662975    3791 log.go:172] (0xc000976c60) (0xc0008ca000) Create stream\nI0224 01:03:36.662996    3791 log.go:172] (0xc000976c60) (0xc0008ca000) Stream added, broadcasting: 3\nI0224 01:03:36.664770    3791 log.go:172] (0xc000976c60) Reply frame received for 3\nI0224 01:03:36.664907    3791 log.go:172] (0xc000976c60) (0xc000b58000) Create stream\nI0224 01:03:36.664920    3791 log.go:172] (0xc000976c60) (0xc000b58000) Stream added, broadcasting: 5\nI0224 01:03:36.666728    3791 log.go:172] (0xc000976c60) Reply frame received for 5\nI0224 01:03:36.772392    3791 log.go:172] (0xc000976c60) Data frame received for 5\nI0224 01:03:36.772484    3791 log.go:172] (0xc000b58000) (5) Data frame handling\nI0224 01:03:36.772517    3791 log.go:172] (0xc000b58000) (5) Data frame sent\n+ nslookup nodeport-service\nI0224 01:03:36.793001    3791 log.go:172] (0xc000976c60) Data frame received for 3\nI0224 01:03:36.793047    3791 log.go:172] (0xc0008ca000) (3) Data frame handling\nI0224 01:03:36.793067    3791 log.go:172] (0xc0008ca000) (3) Data frame sent\nI0224 01:03:36.794293    3791 log.go:172] (0xc000976c60) Data frame received for 3\nI0224 01:03:36.794310    3791 log.go:172] (0xc0008ca000) (3) Data frame handling\nI0224 01:03:36.794324    3791 log.go:172] (0xc0008ca000) (3) Data frame sent\nI0224 01:03:36.893711    3791 log.go:172] (0xc000976c60) Data frame received for 1\nI0224 01:03:36.894121    3791 log.go:172] (0xc000b583c0) (1) Data frame handling\nI0224 01:03:36.894202    3791 log.go:172] (0xc000b583c0) (1) Data frame sent\nI0224 01:03:36.894417    3791 log.go:172] (0xc000976c60) (0xc000b583c0) Stream removed, broadcasting: 1\nI0224 01:03:36.894536    3791 log.go:172] (0xc000976c60) (0xc0008ca000) Stream removed, broadcasting: 3\nI0224 01:03:36.894595    3791 log.go:172] (0xc000976c60) (0xc000b58000) Stream removed, broadcasting: 5\nI0224 01:03:36.894681    3791 log.go:172] (0xc000976c60) Go away received\nI0224 01:03:36.895082    3791 log.go:172] (0xc000976c60) (0xc000b583c0) Stream removed, broadcasting: 1\nI0224 01:03:36.895096    3791 log.go:172] (0xc000976c60) (0xc0008ca000) Stream removed, broadcasting: 3\nI0224 01:03:36.895105    3791 log.go:172] (0xc000976c60) (0xc000b58000) Stream removed, broadcasting: 5\n"
Feb 24 01:03:36.903: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7133.svc.cluster.local\tcanonical name = externalsvc.services-7133.svc.cluster.local.\nName:\texternalsvc.services-7133.svc.cluster.local\nAddress: 10.96.1.39\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7133, will wait for the garbage collector to delete the pods
Feb 24 01:03:36.966: INFO: Deleting ReplicationController externalsvc took: 6.682551ms
Feb 24 01:03:37.266: INFO: Terminating ReplicationController externalsvc pods took: 300.821205ms
Feb 24 01:03:46.064: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:03:46.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7133" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:35.193 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":179,"skipped":3070,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:03:46.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-lfx9
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 01:03:46.205: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lfx9" in namespace "subpath-4926" to be "success or failure"
Feb 24 01:03:46.213: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.05608ms
Feb 24 01:03:48.219: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013759763s
Feb 24 01:03:50.229: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023267832s
Feb 24 01:03:52.234: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028447886s
Feb 24 01:03:54.241: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035352334s
Feb 24 01:03:56.249: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 10.042908809s
Feb 24 01:03:58.294: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 12.088641655s
Feb 24 01:04:01.640: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 15.434128007s
Feb 24 01:04:03.648: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 17.442097066s
Feb 24 01:04:05.656: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 19.449892617s
Feb 24 01:04:07.663: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 21.45768558s
Feb 24 01:04:09.670: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 23.464561676s
Feb 24 01:04:11.701: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 25.495633394s
Feb 24 01:04:13.709: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Running", Reason="", readiness=true. Elapsed: 27.503808159s
Feb 24 01:04:15.719: INFO: Pod "pod-subpath-test-configmap-lfx9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.513025291s
STEP: Saw pod success
Feb 24 01:04:15.719: INFO: Pod "pod-subpath-test-configmap-lfx9" satisfied condition "success or failure"
Feb 24 01:04:15.767: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-lfx9 container test-container-subpath-configmap-lfx9: 
STEP: delete the pod
Feb 24 01:04:15.842: INFO: Waiting for pod pod-subpath-test-configmap-lfx9 to disappear
Feb 24 01:04:15.925: INFO: Pod pod-subpath-test-configmap-lfx9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lfx9
Feb 24 01:04:15.925: INFO: Deleting pod "pod-subpath-test-configmap-lfx9" in namespace "subpath-4926"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:04:15.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4926" for this suite.

• [SLOW TEST:29.863 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":180,"skipped":3074,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:04:15.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:04:22.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2065" for this suite.
STEP: Destroying namespace "nsdeletetest-4265" for this suite.
Feb 24 01:04:22.579: INFO: Namespace nsdeletetest-4265 was already deleted
STEP: Destroying namespace "nsdeletetest-9672" for this suite.

• [SLOW TEST:6.633 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":181,"skipped":3075,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:04:22.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-n9fl
STEP: Creating a pod to test atomic-volume-subpath
Feb 24 01:04:22.794: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n9fl" in namespace "subpath-6406" to be "success or failure"
Feb 24 01:04:22.916: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Pending", Reason="", readiness=false. Elapsed: 121.998468ms
Feb 24 01:04:24.923: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128910772s
Feb 24 01:04:26.931: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13653715s
Feb 24 01:04:28.939: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144761328s
Feb 24 01:04:30.947: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 8.152615275s
Feb 24 01:04:32.954: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 10.15964674s
Feb 24 01:04:34.962: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 12.167852665s
Feb 24 01:04:36.970: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 14.175468954s
Feb 24 01:04:38.975: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 16.18084957s
Feb 24 01:04:40.981: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 18.186656072s
Feb 24 01:04:42.993: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 20.198874838s
Feb 24 01:04:44.999: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 22.204711974s
Feb 24 01:04:47.007: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 24.212761533s
Feb 24 01:04:49.029: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Running", Reason="", readiness=true. Elapsed: 26.234429461s
Feb 24 01:04:51.070: INFO: Pod "pod-subpath-test-configmap-n9fl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.276249748s
STEP: Saw pod success
Feb 24 01:04:51.071: INFO: Pod "pod-subpath-test-configmap-n9fl" satisfied condition "success or failure"
Feb 24 01:04:51.074: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-n9fl container test-container-subpath-configmap-n9fl: 
STEP: delete the pod
Feb 24 01:04:51.103: INFO: Waiting for pod pod-subpath-test-configmap-n9fl to disappear
Feb 24 01:04:51.113: INFO: Pod pod-subpath-test-configmap-n9fl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-n9fl
Feb 24 01:04:51.113: INFO: Deleting pod "pod-subpath-test-configmap-n9fl" in namespace "subpath-6406"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:04:51.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6406" for this suite.

• [SLOW TEST:28.544 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":182,"skipped":3082,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:04:51.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 24 01:05:02.062: INFO: Successfully updated pod "pod-update-activedeadlineseconds-441afd28-9d65-4d03-bfba-dca2123258d9"
Feb 24 01:05:02.062: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-441afd28-9d65-4d03-bfba-dca2123258d9" in namespace "pods-3352" to be "terminated due to deadline exceeded"
Feb 24 01:05:02.070: INFO: Pod "pod-update-activedeadlineseconds-441afd28-9d65-4d03-bfba-dca2123258d9": Phase="Running", Reason="", readiness=true. Elapsed: 7.244691ms
Feb 24 01:05:04.077: INFO: Pod "pod-update-activedeadlineseconds-441afd28-9d65-4d03-bfba-dca2123258d9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014331307s
Feb 24 01:05:04.077: INFO: Pod "pod-update-activedeadlineseconds-441afd28-9d65-4d03-bfba-dca2123258d9" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:05:04.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3352" for this suite.

• [SLOW TEST:12.957 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":183,"skipped":3087,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
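The spec above relies on `activeDeadlineSeconds` being one of the few mutable fields of a running pod's spec: shrinking it causes the kubelet to fail the pod with reason `DeadlineExceeded`, which is exactly the phase transition visible in the log. A sketch of the patch involved (the value 5 is illustrative):

```python
# Illustrative JSON-merge-patch body that shrinks a running pod's deadline.
patch = {"spec": {"activeDeadlineSeconds": 5}}

# Once the deadline elapses, the kubelet terminates the pod; the status the
# test polls for matches the "Failed"/"DeadlineExceeded" lines in the log.
expected_status = {"phase": "Failed", "reason": "DeadlineExceeded"}
```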
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:05:04.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 24 01:05:04.264: INFO: Waiting up to 5m0s for pod "pod-60662670-c2fa-495a-aa99-33a10b8444b5" in namespace "emptydir-2468" to be "success or failure"
Feb 24 01:05:04.278: INFO: Pod "pod-60662670-c2fa-495a-aa99-33a10b8444b5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.906955ms
Feb 24 01:05:06.285: INFO: Pod "pod-60662670-c2fa-495a-aa99-33a10b8444b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020870728s
Feb 24 01:05:08.295: INFO: Pod "pod-60662670-c2fa-495a-aa99-33a10b8444b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030957088s
Feb 24 01:05:10.302: INFO: Pod "pod-60662670-c2fa-495a-aa99-33a10b8444b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037737482s
Feb 24 01:05:12.307: INFO: Pod "pod-60662670-c2fa-495a-aa99-33a10b8444b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043326479s
STEP: Saw pod success
Feb 24 01:05:12.308: INFO: Pod "pod-60662670-c2fa-495a-aa99-33a10b8444b5" satisfied condition "success or failure"
Feb 24 01:05:12.313: INFO: Trying to get logs from node jerma-node pod pod-60662670-c2fa-495a-aa99-33a10b8444b5 container test-container: 
STEP: delete the pod
Feb 24 01:05:12.350: INFO: Waiting for pod pod-60662670-c2fa-495a-aa99-33a10b8444b5 to disappear
Feb 24 01:05:12.360: INFO: Pod pod-60662670-c2fa-495a-aa99-33a10b8444b5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:05:12.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2468" for this suite.

• [SLOW TEST:8.282 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":184,"skipped":3101,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
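The "(non-root,0644,tmpfs)" case above combines three knobs: a non-root UID in the pod security context, an emptyDir backed by memory (`medium: "Memory"` means tmpfs), and a file created with mode 0644. A minimal sketch, assuming a busybox shell in place of the suite's mounttest image (UID and paths are hypothetical):

```python
# Illustrative pod for the (non-root,0644,tmpfs) emptyDir case.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-0644"},
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1001},  # non-root UID
        "volumes": [
            # medium "Memory" backs the emptyDir with tmpfs
            {"name": "test-volume", "emptyDir": {"medium": "Memory"}},
        ],
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "command": ["sh", "-c",
                        "touch /test-volume/f && chmod 0644 /test-volume/f"
                        " && stat -c %a /test-volume/f"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
    },
}
```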
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:05:12.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-d54cb569-dcc7-435c-b056-e016ac94457a in namespace container-probe-9964
Feb 24 01:05:20.552: INFO: Started pod liveness-d54cb569-dcc7-435c-b056-e016ac94457a in namespace container-probe-9964
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 01:05:20.559: INFO: Initial restart count of pod liveness-d54cb569-dcc7-435c-b056-e016ac94457a is 0
Feb 24 01:05:36.640: INFO: Restart count of pod container-probe-9964/liveness-d54cb569-dcc7-435c-b056-e016ac94457a is now 1 (16.080603209s elapsed)
Feb 24 01:05:54.722: INFO: Restart count of pod container-probe-9964/liveness-d54cb569-dcc7-435c-b056-e016ac94457a is now 2 (34.163261859s elapsed)
Feb 24 01:06:15.347: INFO: Restart count of pod container-probe-9964/liveness-d54cb569-dcc7-435c-b056-e016ac94457a is now 3 (54.788394575s elapsed)
Feb 24 01:06:35.469: INFO: Restart count of pod container-probe-9964/liveness-d54cb569-dcc7-435c-b056-e016ac94457a is now 4 (1m14.910455051s elapsed)
Feb 24 01:07:40.276: INFO: Restart count of pod container-probe-9964/liveness-d54cb569-dcc7-435c-b056-e016ac94457a is now 5 (2m19.717393092s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:07:40.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9964" for this suite.

• [SLOW TEST:147.955 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":185,"skipped":3128,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
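The probe spec driving the restarts above can be sketched as follows; the probe command and timings are hypothetical, but the restart counts are the ones actually observed in the log (0 through 5), and the invariant the test checks is that they only ever increase:

```python
# Illustrative liveness probe that keeps failing: the probed file never
# exists, so each probe failure restarts the container and bumps its
# restartCount.
probe = {
    "exec": {"command": ["cat", "/tmp/health"]},  # always fails
    "initialDelaySeconds": 5,
    "periodSeconds": 5,
    "failureThreshold": 1,
}

# Restart counts observed in the log above, in order; the invariant is
# strict monotonic growth.
restarts = [0, 1, 2, 3, 4, 5]
assert all(a < b for a, b in zip(restarts, restarts[1:]))
```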
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:07:40.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 24 01:07:40.433: INFO: >>> kubeConfig: /root/.kube/config
Feb 24 01:07:44.120: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:07:57.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3126" for this suite.

• [SLOW TEST:17.165 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":186,"skipped":3130,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
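The spec above registers two CRDs in *different* API groups and checks that both surface in the apiserver's OpenAPI document. A minimal sketch of two such CRD bodies (group and kind names are illustrative):

```python
def crd(group, plural, kind):
    """Illustrative minimal apiextensions/v1 CustomResourceDefinition body."""
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},  # must be plural.group
        "spec": {
            "group": group,
            "scope": "Namespaced",
            "names": {"plural": plural, "singular": kind.lower(),
                      "kind": kind},
            "versions": [{
                "name": "v1",
                "served": True,
                "storage": True,
                "schema": {"openAPIV3Schema": {"type": "object"}},
            }],
        },
    }

# Two CRDs in different groups, as in the spec above.
a = crd("groupa.example.com", "foos", "Foo")
b = crd("groupb.example.com", "bars", "Bar")
```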
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:07:57.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:07:57.656: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:07:58.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-951" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":280,"completed":187,"skipped":3137,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:07:58.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:07:59.495: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:08:01.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:08:03.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:08:05.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:08:07.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103279, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:08:10.641: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:08:10.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7312" for this suite.
STEP: Destroying namespace "webhook-7312-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.161 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":188,"skipped":3155,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
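"Fail closed" above means `failurePolicy: Fail`: when the apiserver cannot reach the webhook backend, matching requests are rejected rather than let through. A sketch of such a configuration, deliberately pointing at a service that does not exist (all names are illustrative, not the suite's):

```python
# Illustrative ValidatingWebhookConfiguration with an unreachable backend
# and failurePolicy "Fail": every configmap CREATE it matches is rejected
# because the webhook can never answer.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "fail-closed.example.com"},
    "webhooks": [{
        "name": "fail-closed.example.com",
        "failurePolicy": "Fail",  # reject when the webhook is unavailable
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
        "rules": [{
            "apiGroups": [""],
            "apiVersions": ["v1"],
            "operations": ["CREATE"],
            "resources": ["configmaps"],
        }],
        "clientConfig": {
            "service": {  # deliberately non-existent backend
                "namespace": "webhook-markers",
                "name": "no-such-service",
                "path": "/validate",
            },
        },
    }],
}
```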
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:08:10.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:08:10.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:08:21.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2667" for this suite.

• [SLOW TEST:10.416 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":189,"skipped":3185,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
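Exec-over-websockets, as exercised above, goes through the pod's `exec` subresource with repeated `command` query parameters (one per argv element) and the connection upgraded to a websocket. A sketch of the URL construction only (host, namespace, pod name, and command are hypothetical; real clients also negotiate the `channel.k8s.io` subprotocol and bearer auth):

```python
from urllib.parse import urlencode

# Illustrative exec URL against the apiserver's pod exec subresource.
base = "wss://apiserver.example:6443"
path = "/api/v1/namespaces/pods-2667/pods/pod-exec-websocket/exec"
# Each argv element is a separate "command" parameter.
query = urlencode([("command", "echo"), ("command", "remote execution"),
                   ("stdout", "true"), ("stderr", "true")])
url = f"{base}{path}?{query}"
```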
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:08:21.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 24 01:08:21.729: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6967 /api/v1/namespaces/watch-6967/configmaps/e2e-watch-test-resource-version d56c248d-1d55-4987-b719-aa6c83057334 10336940 0 2020-02-24 01:08:21 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 24 01:08:21.730: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6967 /api/v1/namespaces/watch-6967/configmaps/e2e-watch-test-resource-version d56c248d-1d55-4987-b719-aa6c83057334 10336942 0 2020-02-24 01:08:21 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:08:21.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6967" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":190,"skipped":3204,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
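The spec above makes three changes to a configmap, then opens a watch at the resourceVersion returned by the *first* update and expects to see only the two later events (the MODIFIED and DELETED lines in the log, resourceVersions 10336940 and 10336942). A toy replay of that expectation (the two earlier resourceVersions are hypothetical; real clients pass `resourceVersion` to the watch call rather than comparing versions themselves, which is only safe here because the strings have equal length):

```python
# Events in the order the apiserver produced them; the last two
# resourceVersions match the log above, the first two are illustrative.
events = [
    ("ADDED",    "10336937"),
    ("MODIFIED", "10336938"),  # first update: the watch starts *after* this
    ("MODIFIED", "10336940"),
    ("DELETED",  "10336942"),
]
start_rv = "10336938"

# A watch started at start_rv replays only strictly newer events.
replayed = [etype for etype, rv in events if rv > start_rv]
```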
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:08:21.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Feb 24 01:08:21.855: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4168" to be "success or failure"
Feb 24 01:08:21.864: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.933435ms
Feb 24 01:08:23.903: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048456584s
Feb 24 01:08:25.910: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05456919s
Feb 24 01:08:27.957: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101675961s
Feb 24 01:08:29.962: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106981358s
Feb 24 01:08:31.969: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1140227s
STEP: Saw pod success
Feb 24 01:08:31.969: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 24 01:08:32.019: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 24 01:08:32.148: INFO: Waiting for pod pod-host-path-test to disappear
Feb 24 01:08:32.168: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:08:32.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4168" for this suite.

• [SLOW TEST:10.438 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":191,"skipped":3211,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
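The hostPath spec above creates a pod whose volume is a directory on the node itself and asserts the mount carries the expected mode. A minimal sketch of such a pod (path and names are illustrative; the suite actually runs more than one container, e.g. `test-container-1` in the log, against the same volume):

```python
# Illustrative pod mounting a node-local directory via hostPath and
# reporting the volume's mode bits from inside the container.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-host-path-test"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{
            "name": "test-volume",
            # type "" (the default) performs no check on the host path
            "hostPath": {"path": "/tmp/test-dir", "type": ""},
        }],
        "containers": [{
            "name": "test-container-1",
            "image": "busybox",
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
    },
}
```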
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:08:32.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Feb 24 01:08:32.319: INFO: Waiting up to 5m0s for pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa" in namespace "var-expansion-6445" to be "success or failure"
Feb 24 01:08:32.353: INFO: Pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa": Phase="Pending", Reason="", readiness=false. Elapsed: 33.870451ms
Feb 24 01:08:34.363: INFO: Pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043805276s
Feb 24 01:08:36.367: INFO: Pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048221747s
Feb 24 01:08:38.373: INFO: Pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054081417s
Feb 24 01:08:40.380: INFO: Pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061041085s
Feb 24 01:08:42.388: INFO: Pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068452066s
STEP: Saw pod success
Feb 24 01:08:42.388: INFO: Pod "var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa" satisfied condition "success or failure"
Feb 24 01:08:42.392: INFO: Trying to get logs from node jerma-node pod var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa container dapi-container: 
STEP: delete the pod
Feb 24 01:08:42.945: INFO: Waiting for pod var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa to disappear
Feb 24 01:08:42.962: INFO: Pod var-expansion-0afc5a7a-541b-4c92-b9b5-c3c36b4023fa no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:08:42.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6445" for this suite.

• [SLOW TEST:10.797 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":192,"skipped":3213,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
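Command substitution, as tested above, uses the `$(VAR)` syntax: the kubelet expands environment variable references in `command` and `args` before starting the container, with no shell involved. A sketch (env name and value are hypothetical):

```python
# Illustrative pod relying on $(VAR) expansion in the container args; the
# kubelet, not a shell, performs the substitution.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "var-expansion"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "env": [{"name": "MESSAGE", "value": "hello"}],
            "command": ["/bin/echo"],
            # Becomes ["hello"] at container start.
            "args": ["$(MESSAGE)"],
        }],
    },
}
```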
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:08:42.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 24 01:08:43.277: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 01:08:43.315: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 01:08:43.375: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 24 01:08:43.384: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 24 01:08:43.384: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 01:08:43.384: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 24 01:08:43.384: INFO: 	Container weave ready: true, restart count 1
Feb 24 01:08:43.384: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 01:08:43.384: INFO: pod-exec-websocket-a6a54bdd-2b78-4cd5-9ce9-3b5407b2e93c from pods-2667 started at 2020-02-24 01:08:11 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.384: INFO: 	Container main ready: true, restart count 0
Feb 24 01:08:43.384: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 24 01:08:43.412: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.412: INFO: 	Container kube-scheduler ready: true, restart count 23
Feb 24 01:08:43.412: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.412: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 24 01:08:43.412: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.412: INFO: 	Container etcd ready: true, restart count 1
Feb 24 01:08:43.412: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.412: INFO: 	Container coredns ready: true, restart count 0
Feb 24 01:08:43.412: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.412: INFO: 	Container coredns ready: true, restart count 0
Feb 24 01:08:43.412: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 24 01:08:43.412: INFO: 	Container weave ready: true, restart count 0
Feb 24 01:08:43.412: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 01:08:43.412: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.412: INFO: 	Container kube-controller-manager ready: true, restart count 17
Feb 24 01:08:43.412: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 24 01:08:43.412: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb 24 01:08:43.611: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb 24 01:08:43.611: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Feb 24 01:08:43.611: INFO: Pod pod-exec-websocket-a6a54bdd-2b78-4cd5-9ce9-3b5407b2e93c requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Feb 24 01:08:43.611: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Feb 24 01:08:43.622: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbdc610f-719e-4a68-aaf3-766ab3adc81e.15f6314c7d65f607], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5083/filler-pod-bbdc610f-719e-4a68-aaf3-766ab3adc81e to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbdc610f-719e-4a68-aaf3-766ab3adc81e.15f6314dbafe4114], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbdc610f-719e-4a68-aaf3-766ab3adc81e.15f6314e973531ee], Reason = [Created], Message = [Created container filler-pod-bbdc610f-719e-4a68-aaf3-766ab3adc81e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbdc610f-719e-4a68-aaf3-766ab3adc81e.15f6314eba5a5b58], Reason = [Started], Message = [Started container filler-pod-bbdc610f-719e-4a68-aaf3-766ab3adc81e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbf9fc14-c304-4bda-904e-5f9fe16531d1.15f6314c7cab9c95], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5083/filler-pod-fbf9fc14-c304-4bda-904e-5f9fe16531d1 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbf9fc14-c304-4bda-904e-5f9fe16531d1.15f6314d86a8881f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbf9fc14-c304-4bda-904e-5f9fe16531d1.15f6314e3fff216a], Reason = [Created], Message = [Created container filler-pod-fbf9fc14-c304-4bda-904e-5f9fe16531d1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbf9fc14-c304-4bda-904e-5f9fe16531d1.15f6314e61378c26], Reason = [Started], Message = [Started container filler-pod-fbf9fc14-c304-4bda-904e-5f9fe16531d1]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f6314eda195946], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f6314edb80c6fe], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:08:54.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5083" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:11.986 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":280,"completed":193,"skipped":3235,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:08:54.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 24 01:08:55.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7641'
Feb 24 01:08:55.125: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 01:08:55.126: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb 24 01:08:55.170: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-jv8vf]
Feb 24 01:08:55.170: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-jv8vf" in namespace "kubectl-7641" to be "running and ready"
Feb 24 01:08:55.277: INFO: Pod "e2e-test-httpd-rc-jv8vf": Phase="Pending", Reason="", readiness=false. Elapsed: 106.517387ms
Feb 24 01:08:57.287: INFO: Pod "e2e-test-httpd-rc-jv8vf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116627691s
Feb 24 01:08:59.998: INFO: Pod "e2e-test-httpd-rc-jv8vf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.827926552s
Feb 24 01:09:02.144: INFO: Pod "e2e-test-httpd-rc-jv8vf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.974213297s
Feb 24 01:09:04.239: INFO: Pod "e2e-test-httpd-rc-jv8vf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.068548432s
Feb 24 01:09:06.585: INFO: Pod "e2e-test-httpd-rc-jv8vf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.415449686s
Feb 24 01:09:08.630: INFO: Pod "e2e-test-httpd-rc-jv8vf": Phase="Running", Reason="", readiness=true. Elapsed: 13.46043269s
Feb 24 01:09:08.631: INFO: Pod "e2e-test-httpd-rc-jv8vf" satisfied condition "running and ready"
Feb 24 01:09:08.631: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-jv8vf]
Feb 24 01:09:08.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7641'
Feb 24 01:09:08.794: INFO: stderr: ""
Feb 24 01:09:08.795: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.32.0.5. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.32.0.5. Set the 'ServerName' directive globally to suppress this message\n[Mon Feb 24 01:09:04.831041 2020] [mpm_event:notice] [pid 1:tid 140334020963176] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Feb 24 01:09:04.831149 2020] [core:notice] [pid 1:tid 140334020963176] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639
Feb 24 01:09:08.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7641'
Feb 24 01:09:08.920: INFO: stderr: ""
Feb 24 01:09:08.920: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:09:08.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7641" for this suite.

• [SLOW TEST:14.002 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":280,"completed":194,"skipped":3242,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:09:08.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:09:09.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 24 01:09:14.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2371 create -f -'
Feb 24 01:09:16.778: INFO: stderr: ""
Feb 24 01:09:16.778: INFO: stdout: "e2e-test-crd-publish-openapi-4360-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 24 01:09:16.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2371 delete e2e-test-crd-publish-openapi-4360-crds test-cr'
Feb 24 01:09:16.908: INFO: stderr: ""
Feb 24 01:09:16.908: INFO: stdout: "e2e-test-crd-publish-openapi-4360-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 24 01:09:16.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2371 apply -f -'
Feb 24 01:09:17.208: INFO: stderr: ""
Feb 24 01:09:17.209: INFO: stdout: "e2e-test-crd-publish-openapi-4360-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 24 01:09:17.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2371 delete e2e-test-crd-publish-openapi-4360-crds test-cr'
Feb 24 01:09:17.329: INFO: stderr: ""
Feb 24 01:09:17.329: INFO: stdout: "e2e-test-crd-publish-openapi-4360-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 24 01:09:17.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4360-crds'
Feb 24 01:09:17.641: INFO: stderr: ""
Feb 24 01:09:17.641: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4360-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:09:21.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2371" for this suite.

• [SLOW TEST:12.278 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":195,"skipped":3245,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:09:21.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 24 01:09:21.403: INFO: Waiting up to 5m0s for pod "pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167" in namespace "emptydir-5567" to be "success or failure"
Feb 24 01:09:21.414: INFO: Pod "pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167": Phase="Pending", Reason="", readiness=false. Elapsed: 11.371009ms
Feb 24 01:09:23.422: INFO: Pod "pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019262076s
Feb 24 01:09:25.431: INFO: Pod "pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027866246s
Feb 24 01:09:27.440: INFO: Pod "pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036755065s
Feb 24 01:09:29.448: INFO: Pod "pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045366932s
STEP: Saw pod success
Feb 24 01:09:29.449: INFO: Pod "pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167" satisfied condition "success or failure"
Feb 24 01:09:29.453: INFO: Trying to get logs from node jerma-node pod pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167 container test-container: 
STEP: delete the pod
Feb 24 01:09:29.502: INFO: Waiting for pod pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167 to disappear
Feb 24 01:09:29.593: INFO: Pod pod-40a39111-0fa9-4e5f-bd9f-b20ac7caa167 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:09:29.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5567" for this suite.

• [SLOW TEST:8.354 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":196,"skipped":3247,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:09:29.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:09:29.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5083
I0224 01:09:29.975652      10 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5083, replica count: 1
I0224 01:09:31.027617      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:09:32.028633      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:09:33.029172      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:09:34.029791      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:09:35.030810      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:09:36.031388      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:09:37.031876      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:09:38.032612      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 24 01:09:38.160: INFO: Created: latency-svc-ntcmt
Feb 24 01:09:38.192: INFO: Got endpoints: latency-svc-ntcmt [59.392616ms]
Feb 24 01:09:38.233: INFO: Created: latency-svc-kjt4s
Feb 24 01:09:38.313: INFO: Created: latency-svc-hx7gb
Feb 24 01:09:38.313: INFO: Got endpoints: latency-svc-kjt4s [120.363975ms]
Feb 24 01:09:38.330: INFO: Got endpoints: latency-svc-hx7gb [135.755676ms]
Feb 24 01:09:38.352: INFO: Created: latency-svc-p627l
Feb 24 01:09:38.356: INFO: Got endpoints: latency-svc-p627l [161.904348ms]
Feb 24 01:09:38.382: INFO: Created: latency-svc-dhrkn
Feb 24 01:09:38.391: INFO: Got endpoints: latency-svc-dhrkn [197.809066ms]
Feb 24 01:09:38.504: INFO: Created: latency-svc-qw86c
Feb 24 01:09:38.582: INFO: Got endpoints: latency-svc-qw86c [387.594886ms]
Feb 24 01:09:38.591: INFO: Created: latency-svc-klbs5
Feb 24 01:09:38.694: INFO: Got endpoints: latency-svc-klbs5 [500.700381ms]
Feb 24 01:09:38.789: INFO: Created: latency-svc-6vjh2
Feb 24 01:09:38.980: INFO: Got endpoints: latency-svc-6vjh2 [786.43222ms]
Feb 24 01:09:39.012: INFO: Created: latency-svc-2sgz4
Feb 24 01:09:39.038: INFO: Got endpoints: latency-svc-2sgz4 [843.9741ms]
Feb 24 01:09:39.217: INFO: Created: latency-svc-47789
Feb 24 01:09:39.256: INFO: Got endpoints: latency-svc-47789 [1.062093438s]
Feb 24 01:09:39.257: INFO: Created: latency-svc-b2l9h
Feb 24 01:09:39.273: INFO: Got endpoints: latency-svc-b2l9h [1.078977632s]
Feb 24 01:09:39.474: INFO: Created: latency-svc-2tlvp
Feb 24 01:09:39.482: INFO: Got endpoints: latency-svc-2tlvp [1.288202616s]
Feb 24 01:09:39.532: INFO: Created: latency-svc-r52b7
Feb 24 01:09:39.549: INFO: Got endpoints: latency-svc-r52b7 [1.354357562s]
Feb 24 01:09:39.571: INFO: Created: latency-svc-s2n8x
Feb 24 01:09:39.735: INFO: Created: latency-svc-46n9p
Feb 24 01:09:39.740: INFO: Got endpoints: latency-svc-s2n8x [1.546592397s]
Feb 24 01:09:39.746: INFO: Got endpoints: latency-svc-46n9p [1.55221671s]
Feb 24 01:09:39.894: INFO: Created: latency-svc-9tfm9
Feb 24 01:09:39.936: INFO: Got endpoints: latency-svc-9tfm9 [1.741885009s]
Feb 24 01:09:39.940: INFO: Created: latency-svc-kbhjn
Feb 24 01:09:39.948: INFO: Got endpoints: latency-svc-kbhjn [1.634325751s]
Feb 24 01:09:39.974: INFO: Created: latency-svc-hlv9d
Feb 24 01:09:39.977: INFO: Got endpoints: latency-svc-hlv9d [1.647366221s]
Feb 24 01:09:40.043: INFO: Created: latency-svc-wznm2
Feb 24 01:09:40.052: INFO: Got endpoints: latency-svc-wznm2 [1.695766061s]
Feb 24 01:09:40.078: INFO: Created: latency-svc-lp485
Feb 24 01:09:40.087: INFO: Got endpoints: latency-svc-lp485 [1.696412627s]
Feb 24 01:09:40.120: INFO: Created: latency-svc-hs2mz
Feb 24 01:09:40.125: INFO: Got endpoints: latency-svc-hs2mz [1.542688841s]
Feb 24 01:09:40.182: INFO: Created: latency-svc-l9r5h
Feb 24 01:09:40.203: INFO: Got endpoints: latency-svc-l9r5h [1.5090712s]
Feb 24 01:09:40.223: INFO: Created: latency-svc-xlkcv
Feb 24 01:09:40.232: INFO: Got endpoints: latency-svc-xlkcv [1.251452452s]
Feb 24 01:09:40.258: INFO: Created: latency-svc-2mc6q
Feb 24 01:09:40.355: INFO: Got endpoints: latency-svc-2mc6q [1.31647614s]
Feb 24 01:09:40.381: INFO: Created: latency-svc-j6tw7
Feb 24 01:09:40.396: INFO: Got endpoints: latency-svc-j6tw7 [1.139386374s]
Feb 24 01:09:40.433: INFO: Created: latency-svc-hsjzb
Feb 24 01:09:40.443: INFO: Got endpoints: latency-svc-hsjzb [1.169613358s]
Feb 24 01:09:40.539: INFO: Created: latency-svc-f9b7f
Feb 24 01:09:40.555: INFO: Got endpoints: latency-svc-f9b7f [1.073067494s]
Feb 24 01:09:40.588: INFO: Created: latency-svc-qtksf
Feb 24 01:09:40.591: INFO: Got endpoints: latency-svc-qtksf [1.041817724s]
Feb 24 01:09:40.653: INFO: Created: latency-svc-pfmk7
Feb 24 01:09:40.658: INFO: Got endpoints: latency-svc-pfmk7 [917.470511ms]
Feb 24 01:09:40.694: INFO: Created: latency-svc-cc5bw
Feb 24 01:09:40.719: INFO: Got endpoints: latency-svc-cc5bw [972.098032ms]
Feb 24 01:09:40.728: INFO: Created: latency-svc-scrzr
Feb 24 01:09:40.742: INFO: Got endpoints: latency-svc-scrzr [805.796377ms]
Feb 24 01:09:40.811: INFO: Created: latency-svc-8n47q
Feb 24 01:09:40.841: INFO: Got endpoints: latency-svc-8n47q [893.224275ms]
Feb 24 01:09:40.893: INFO: Created: latency-svc-qmxnq
Feb 24 01:09:40.906: INFO: Got endpoints: latency-svc-qmxnq [928.409271ms]
Feb 24 01:09:40.995: INFO: Created: latency-svc-64xk6
Feb 24 01:09:41.010: INFO: Got endpoints: latency-svc-64xk6 [958.177934ms]
Feb 24 01:09:41.036: INFO: Created: latency-svc-7qj6p
Feb 24 01:09:41.042: INFO: Got endpoints: latency-svc-7qj6p [954.774414ms]
Feb 24 01:09:41.073: INFO: Created: latency-svc-qpfvv
Feb 24 01:09:41.083: INFO: Got endpoints: latency-svc-qpfvv [958.137921ms]
Feb 24 01:09:41.194: INFO: Created: latency-svc-fq8z6
Feb 24 01:09:41.229: INFO: Got endpoints: latency-svc-fq8z6 [1.025773167s]
Feb 24 01:09:41.230: INFO: Created: latency-svc-c8d7d
Feb 24 01:09:41.259: INFO: Got endpoints: latency-svc-c8d7d [1.02638214s]
Feb 24 01:09:41.415: INFO: Created: latency-svc-q6td8
Feb 24 01:09:41.423: INFO: Got endpoints: latency-svc-q6td8 [1.067674975s]
Feb 24 01:09:41.582: INFO: Created: latency-svc-sjc92
Feb 24 01:09:41.582: INFO: Got endpoints: latency-svc-sjc92 [1.18588464s]
Feb 24 01:09:41.641: INFO: Created: latency-svc-gmv2f
Feb 24 01:09:41.649: INFO: Got endpoints: latency-svc-gmv2f [1.205841457s]
Feb 24 01:09:41.798: INFO: Created: latency-svc-rsp9q
Feb 24 01:09:41.955: INFO: Got endpoints: latency-svc-rsp9q [1.399098407s]
Feb 24 01:09:41.961: INFO: Created: latency-svc-tv82m
Feb 24 01:09:41.972: INFO: Got endpoints: latency-svc-tv82m [1.38099398s]
Feb 24 01:09:41.994: INFO: Created: latency-svc-225cl
Feb 24 01:09:42.029: INFO: Got endpoints: latency-svc-225cl [1.370690277s]
Feb 24 01:09:42.096: INFO: Created: latency-svc-gnhhq
Feb 24 01:09:42.127: INFO: Created: latency-svc-l42db
Feb 24 01:09:42.127: INFO: Got endpoints: latency-svc-gnhhq [1.408028903s]
Feb 24 01:09:42.149: INFO: Got endpoints: latency-svc-l42db [1.406438688s]
Feb 24 01:09:42.152: INFO: Created: latency-svc-76ml7
Feb 24 01:09:42.154: INFO: Got endpoints: latency-svc-76ml7 [1.312392683s]
Feb 24 01:09:42.191: INFO: Created: latency-svc-hspv9
Feb 24 01:09:42.251: INFO: Created: latency-svc-kpvvm
Feb 24 01:09:42.251: INFO: Got endpoints: latency-svc-hspv9 [1.345272943s]
Feb 24 01:09:42.257: INFO: Got endpoints: latency-svc-kpvvm [1.246194978s]
Feb 24 01:09:42.299: INFO: Created: latency-svc-dq5hw
Feb 24 01:09:42.343: INFO: Got endpoints: latency-svc-dq5hw [1.300116107s]
Feb 24 01:09:42.461: INFO: Created: latency-svc-pdszt
Feb 24 01:09:42.480: INFO: Got endpoints: latency-svc-pdszt [1.396191854s]
Feb 24 01:09:42.524: INFO: Created: latency-svc-wlnbm
Feb 24 01:09:42.532: INFO: Got endpoints: latency-svc-wlnbm [1.302793553s]
Feb 24 01:09:42.532: INFO: Created: latency-svc-2m49q
Feb 24 01:09:42.557: INFO: Got endpoints: latency-svc-2m49q [1.297991038s]
Feb 24 01:09:42.611: INFO: Created: latency-svc-ctg8w
Feb 24 01:09:42.617: INFO: Got endpoints: latency-svc-ctg8w [1.194171651s]
Feb 24 01:09:42.644: INFO: Created: latency-svc-rxfgk
Feb 24 01:09:42.674: INFO: Got endpoints: latency-svc-rxfgk [1.091372508s]
Feb 24 01:09:42.703: INFO: Created: latency-svc-vjs75
Feb 24 01:09:42.755: INFO: Got endpoints: latency-svc-vjs75 [1.105413821s]
Feb 24 01:09:42.778: INFO: Created: latency-svc-cjsxf
Feb 24 01:09:42.791: INFO: Got endpoints: latency-svc-cjsxf [836.159387ms]
Feb 24 01:09:42.825: INFO: Created: latency-svc-fhrt7
Feb 24 01:09:42.835: INFO: Got endpoints: latency-svc-fhrt7 [862.307672ms]
Feb 24 01:09:42.899: INFO: Created: latency-svc-9nglq
Feb 24 01:09:42.901: INFO: Got endpoints: latency-svc-9nglq [871.68703ms]
Feb 24 01:09:42.936: INFO: Created: latency-svc-5xpzl
Feb 24 01:09:42.939: INFO: Got endpoints: latency-svc-5xpzl [812.230641ms]
Feb 24 01:09:42.966: INFO: Created: latency-svc-smwzw
Feb 24 01:09:42.977: INFO: Got endpoints: latency-svc-smwzw [828.211014ms]
Feb 24 01:09:43.031: INFO: Created: latency-svc-l5l7w
Feb 24 01:09:43.035: INFO: Got endpoints: latency-svc-l5l7w [880.665677ms]
Feb 24 01:09:43.069: INFO: Created: latency-svc-4ck5l
Feb 24 01:09:43.075: INFO: Got endpoints: latency-svc-4ck5l [823.423059ms]
Feb 24 01:09:43.098: INFO: Created: latency-svc-2wgc7
Feb 24 01:09:43.118: INFO: Got endpoints: latency-svc-2wgc7 [860.846105ms]
Feb 24 01:09:43.182: INFO: Created: latency-svc-bwsfq
Feb 24 01:09:43.185: INFO: Got endpoints: latency-svc-bwsfq [841.819947ms]
Feb 24 01:09:43.229: INFO: Created: latency-svc-k8bnp
Feb 24 01:09:43.242: INFO: Got endpoints: latency-svc-k8bnp [762.70936ms]
Feb 24 01:09:43.273: INFO: Created: latency-svc-z4qmh
Feb 24 01:09:43.329: INFO: Got endpoints: latency-svc-z4qmh [796.532795ms]
Feb 24 01:09:43.363: INFO: Created: latency-svc-lsf6p
Feb 24 01:09:43.369: INFO: Got endpoints: latency-svc-lsf6p [811.807203ms]
Feb 24 01:09:43.424: INFO: Created: latency-svc-fp8qw
Feb 24 01:09:43.540: INFO: Got endpoints: latency-svc-fp8qw [922.507002ms]
Feb 24 01:09:43.543: INFO: Created: latency-svc-tm9qn
Feb 24 01:09:43.552: INFO: Got endpoints: latency-svc-tm9qn [878.365411ms]
Feb 24 01:09:44.278: INFO: Created: latency-svc-k79lx
Feb 24 01:09:44.310: INFO: Got endpoints: latency-svc-k79lx [1.553982306s]
Feb 24 01:09:44.321: INFO: Created: latency-svc-mkdqt
Feb 24 01:09:44.322: INFO: Got endpoints: latency-svc-mkdqt [1.530514674s]
Feb 24 01:09:44.358: INFO: Created: latency-svc-7sfgp
Feb 24 01:09:44.361: INFO: Got endpoints: latency-svc-7sfgp [1.526243078s]
Feb 24 01:09:44.440: INFO: Created: latency-svc-t9fv2
Feb 24 01:09:44.452: INFO: Got endpoints: latency-svc-t9fv2 [1.55156655s]
Feb 24 01:09:44.480: INFO: Created: latency-svc-q6665
Feb 24 01:09:44.490: INFO: Got endpoints: latency-svc-q6665 [1.55020038s]
Feb 24 01:09:44.517: INFO: Created: latency-svc-rlfzn
Feb 24 01:09:44.524: INFO: Got endpoints: latency-svc-rlfzn [1.546154925s]
Feb 24 01:09:44.585: INFO: Created: latency-svc-c8d85
Feb 24 01:09:44.587: INFO: Got endpoints: latency-svc-c8d85 [1.552577187s]
Feb 24 01:09:44.645: INFO: Created: latency-svc-cwhf6
Feb 24 01:09:44.654: INFO: Got endpoints: latency-svc-cwhf6 [1.578848795s]
Feb 24 01:09:44.684: INFO: Created: latency-svc-4h6rn
Feb 24 01:09:44.772: INFO: Created: latency-svc-mxsdl
Feb 24 01:09:44.771: INFO: Got endpoints: latency-svc-4h6rn [1.65370337s]
Feb 24 01:09:44.777: INFO: Got endpoints: latency-svc-mxsdl [1.59264932s]
Feb 24 01:09:44.812: INFO: Created: latency-svc-6n5z8
Feb 24 01:09:44.816: INFO: Got endpoints: latency-svc-6n5z8 [1.572764438s]
Feb 24 01:09:44.839: INFO: Created: latency-svc-lp2mc
Feb 24 01:09:44.844: INFO: Got endpoints: latency-svc-lp2mc [1.515028393s]
Feb 24 01:09:44.868: INFO: Created: latency-svc-gf5qz
Feb 24 01:09:44.934: INFO: Got endpoints: latency-svc-gf5qz [1.564378937s]
Feb 24 01:09:44.944: INFO: Created: latency-svc-29pmh
Feb 24 01:09:44.953: INFO: Got endpoints: latency-svc-29pmh [1.413109487s]
Feb 24 01:09:44.978: INFO: Created: latency-svc-q8hpt
Feb 24 01:09:45.002: INFO: Got endpoints: latency-svc-q8hpt [1.44969696s]
Feb 24 01:09:45.033: INFO: Created: latency-svc-h5xfr
Feb 24 01:09:45.077: INFO: Got endpoints: latency-svc-h5xfr [767.192496ms]
Feb 24 01:09:45.093: INFO: Created: latency-svc-q89mk
Feb 24 01:09:45.118: INFO: Got endpoints: latency-svc-q89mk [796.193111ms]
Feb 24 01:09:45.133: INFO: Created: latency-svc-msww9
Feb 24 01:09:45.161: INFO: Got endpoints: latency-svc-msww9 [799.64136ms]
Feb 24 01:09:45.166: INFO: Created: latency-svc-2s9fw
Feb 24 01:09:45.231: INFO: Got endpoints: latency-svc-2s9fw [778.334243ms]
Feb 24 01:09:45.233: INFO: Created: latency-svc-wtwrq
Feb 24 01:09:45.247: INFO: Got endpoints: latency-svc-wtwrq [757.277506ms]
Feb 24 01:09:45.274: INFO: Created: latency-svc-d2dj5
Feb 24 01:09:45.296: INFO: Got endpoints: latency-svc-d2dj5 [772.032979ms]
Feb 24 01:09:45.385: INFO: Created: latency-svc-fk2zz
Feb 24 01:09:45.393: INFO: Got endpoints: latency-svc-fk2zz [805.189572ms]
Feb 24 01:09:45.427: INFO: Created: latency-svc-94pt5
Feb 24 01:09:45.430: INFO: Got endpoints: latency-svc-94pt5 [775.570121ms]
Feb 24 01:09:45.473: INFO: Created: latency-svc-xwdjn
Feb 24 01:09:45.571: INFO: Got endpoints: latency-svc-xwdjn [799.318635ms]
Feb 24 01:09:45.579: INFO: Created: latency-svc-2hx6w
Feb 24 01:09:45.588: INFO: Got endpoints: latency-svc-2hx6w [810.347729ms]
Feb 24 01:09:45.615: INFO: Created: latency-svc-v9dfs
Feb 24 01:09:45.632: INFO: Got endpoints: latency-svc-v9dfs [816.526113ms]
Feb 24 01:09:45.635: INFO: Created: latency-svc-nqk84
Feb 24 01:09:45.661: INFO: Created: latency-svc-94pqj
Feb 24 01:09:45.661: INFO: Got endpoints: latency-svc-nqk84 [816.98227ms]
Feb 24 01:09:45.818: INFO: Created: latency-svc-xj8zs
Feb 24 01:09:45.820: INFO: Got endpoints: latency-svc-94pqj [886.231649ms]
Feb 24 01:09:45.849: INFO: Got endpoints: latency-svc-xj8zs [895.210921ms]
Feb 24 01:09:45.910: INFO: Created: latency-svc-p2ff7
Feb 24 01:09:45.960: INFO: Got endpoints: latency-svc-p2ff7 [957.56157ms]
Feb 24 01:09:45.989: INFO: Created: latency-svc-lxgt9
Feb 24 01:09:46.031: INFO: Created: latency-svc-tdjg9
Feb 24 01:09:46.032: INFO: Got endpoints: latency-svc-lxgt9 [954.397368ms]
Feb 24 01:09:46.033: INFO: Got endpoints: latency-svc-tdjg9 [914.544773ms]
Feb 24 01:09:46.104: INFO: Created: latency-svc-xqvxh
Feb 24 01:09:46.109: INFO: Got endpoints: latency-svc-xqvxh [948.117513ms]
Feb 24 01:09:46.175: INFO: Created: latency-svc-5xb49
Feb 24 01:09:46.175: INFO: Got endpoints: latency-svc-5xb49 [944.409937ms]
Feb 24 01:09:46.204: INFO: Created: latency-svc-trbsr
Feb 24 01:09:46.294: INFO: Got endpoints: latency-svc-trbsr [1.047154556s]
Feb 24 01:09:46.306: INFO: Created: latency-svc-cglms
Feb 24 01:09:46.332: INFO: Got endpoints: latency-svc-cglms [1.035823636s]
Feb 24 01:09:46.360: INFO: Created: latency-svc-qjnrv
Feb 24 01:09:46.365: INFO: Got endpoints: latency-svc-qjnrv [971.750481ms]
Feb 24 01:09:46.435: INFO: Created: latency-svc-4wbkw
Feb 24 01:09:46.439: INFO: Got endpoints: latency-svc-4wbkw [1.009497398s]
Feb 24 01:09:46.472: INFO: Created: latency-svc-8nrlq
Feb 24 01:09:46.498: INFO: Got endpoints: latency-svc-8nrlq [133.05122ms]
Feb 24 01:09:46.530: INFO: Created: latency-svc-rbs5z
Feb 24 01:09:46.612: INFO: Got endpoints: latency-svc-rbs5z [1.040977801s]
Feb 24 01:09:46.615: INFO: Created: latency-svc-d6pcs
Feb 24 01:09:46.628: INFO: Got endpoints: latency-svc-d6pcs [1.040642001s]
Feb 24 01:09:46.656: INFO: Created: latency-svc-xltr4
Feb 24 01:09:46.668: INFO: Got endpoints: latency-svc-xltr4 [1.035833819s]
Feb 24 01:09:46.690: INFO: Created: latency-svc-k7sks
Feb 24 01:09:46.710: INFO: Got endpoints: latency-svc-k7sks [1.049403009s]
Feb 24 01:09:46.713: INFO: Created: latency-svc-4sxtc
Feb 24 01:09:46.773: INFO: Got endpoints: latency-svc-4sxtc [952.742822ms]
Feb 24 01:09:46.778: INFO: Created: latency-svc-xpnnc
Feb 24 01:09:46.787: INFO: Got endpoints: latency-svc-xpnnc [937.758162ms]
Feb 24 01:09:46.815: INFO: Created: latency-svc-b8mtv
Feb 24 01:09:46.824: INFO: Got endpoints: latency-svc-b8mtv [863.962019ms]
Feb 24 01:09:46.836: INFO: Created: latency-svc-k8jzh
Feb 24 01:09:46.841: INFO: Got endpoints: latency-svc-k8jzh [807.830946ms]
Feb 24 01:09:46.864: INFO: Created: latency-svc-46dgq
Feb 24 01:09:46.866: INFO: Got endpoints: latency-svc-46dgq [834.292555ms]
Feb 24 01:09:46.929: INFO: Created: latency-svc-hh9l8
Feb 24 01:09:46.956: INFO: Got endpoints: latency-svc-hh9l8 [846.862108ms]
Feb 24 01:09:47.007: INFO: Created: latency-svc-znmmr
Feb 24 01:09:47.014: INFO: Got endpoints: latency-svc-znmmr [838.405593ms]
Feb 24 01:09:47.101: INFO: Created: latency-svc-fmffb
Feb 24 01:09:47.131: INFO: Got endpoints: latency-svc-fmffb [836.819871ms]
Feb 24 01:09:47.134: INFO: Created: latency-svc-6lnw2
Feb 24 01:09:47.155: INFO: Got endpoints: latency-svc-6lnw2 [823.102126ms]
Feb 24 01:09:47.180: INFO: Created: latency-svc-ck4d2
Feb 24 01:09:47.191: INFO: Got endpoints: latency-svc-ck4d2 [751.547077ms]
Feb 24 01:09:47.257: INFO: Created: latency-svc-txk69
Feb 24 01:09:47.270: INFO: Got endpoints: latency-svc-txk69 [771.812538ms]
Feb 24 01:09:47.356: INFO: Created: latency-svc-xfcjq
Feb 24 01:09:47.402: INFO: Got endpoints: latency-svc-xfcjq [789.859264ms]
Feb 24 01:09:47.430: INFO: Created: latency-svc-qdqww
Feb 24 01:09:47.447: INFO: Got endpoints: latency-svc-qdqww [817.916229ms]
Feb 24 01:09:47.470: INFO: Created: latency-svc-jdz8g
Feb 24 01:09:47.481: INFO: Got endpoints: latency-svc-jdz8g [812.666433ms]
Feb 24 01:09:47.597: INFO: Created: latency-svc-w7dsg
Feb 24 01:09:47.618: INFO: Got endpoints: latency-svc-w7dsg [907.49919ms]
Feb 24 01:09:47.645: INFO: Created: latency-svc-xdfkh
Feb 24 01:09:47.661: INFO: Created: latency-svc-bftjj
Feb 24 01:09:47.663: INFO: Got endpoints: latency-svc-xdfkh [890.150837ms]
Feb 24 01:09:47.684: INFO: Created: latency-svc-h79m5
Feb 24 01:09:47.685: INFO: Got endpoints: latency-svc-bftjj [897.581964ms]
Feb 24 01:09:47.774: INFO: Got endpoints: latency-svc-h79m5 [949.892414ms]
Feb 24 01:09:47.783: INFO: Created: latency-svc-zfr7s
Feb 24 01:09:47.788: INFO: Got endpoints: latency-svc-zfr7s [947.327734ms]
Feb 24 01:09:47.856: INFO: Created: latency-svc-2w9pz
Feb 24 01:09:47.866: INFO: Got endpoints: latency-svc-2w9pz [999.663641ms]
Feb 24 01:09:47.930: INFO: Created: latency-svc-pz9mb
Feb 24 01:09:47.938: INFO: Got endpoints: latency-svc-pz9mb [981.881578ms]
Feb 24 01:09:47.976: INFO: Created: latency-svc-f6987
Feb 24 01:09:47.985: INFO: Got endpoints: latency-svc-f6987 [971.533414ms]
Feb 24 01:09:48.011: INFO: Created: latency-svc-7r4wg
Feb 24 01:09:48.023: INFO: Got endpoints: latency-svc-7r4wg [891.282947ms]
Feb 24 01:09:48.077: INFO: Created: latency-svc-kz6qh
Feb 24 01:09:48.083: INFO: Got endpoints: latency-svc-kz6qh [928.268149ms]
Feb 24 01:09:48.104: INFO: Created: latency-svc-fkfdw
Feb 24 01:09:48.112: INFO: Got endpoints: latency-svc-fkfdw [921.13653ms]
Feb 24 01:09:48.128: INFO: Created: latency-svc-25kxr
Feb 24 01:09:48.144: INFO: Created: latency-svc-dnfgz
Feb 24 01:09:48.144: INFO: Got endpoints: latency-svc-25kxr [874.075241ms]
Feb 24 01:09:48.148: INFO: Got endpoints: latency-svc-dnfgz [744.789093ms]
Feb 24 01:09:48.173: INFO: Created: latency-svc-xn2tm
Feb 24 01:09:48.224: INFO: Got endpoints: latency-svc-xn2tm [776.818415ms]
Feb 24 01:09:48.251: INFO: Created: latency-svc-4kdkr
Feb 24 01:09:48.254: INFO: Got endpoints: latency-svc-4kdkr [773.104151ms]
Feb 24 01:09:48.305: INFO: Created: latency-svc-kvt6t
Feb 24 01:09:48.310: INFO: Got endpoints: latency-svc-kvt6t [691.503847ms]
Feb 24 01:09:48.409: INFO: Created: latency-svc-2nb6q
Feb 24 01:09:48.432: INFO: Got endpoints: latency-svc-2nb6q [768.07699ms]
Feb 24 01:09:48.432: INFO: Created: latency-svc-nm225
Feb 24 01:09:48.470: INFO: Got endpoints: latency-svc-nm225 [785.331226ms]
Feb 24 01:09:48.503: INFO: Created: latency-svc-tqdx8
Feb 24 01:09:48.503: INFO: Got endpoints: latency-svc-tqdx8 [728.852375ms]
Feb 24 01:09:48.568: INFO: Created: latency-svc-qdd47
Feb 24 01:09:48.593: INFO: Got endpoints: latency-svc-qdd47 [804.259279ms]
Feb 24 01:09:48.595: INFO: Created: latency-svc-5jzdx
Feb 24 01:09:48.601: INFO: Got endpoints: latency-svc-5jzdx [734.785188ms]
Feb 24 01:09:48.640: INFO: Created: latency-svc-cx6v9
Feb 24 01:09:48.645: INFO: Got endpoints: latency-svc-cx6v9 [706.695692ms]
Feb 24 01:09:48.705: INFO: Created: latency-svc-ptklp
Feb 24 01:09:48.709: INFO: Got endpoints: latency-svc-ptklp [723.93412ms]
Feb 24 01:09:48.746: INFO: Created: latency-svc-6m6x2
Feb 24 01:09:48.774: INFO: Got endpoints: latency-svc-6m6x2 [750.614262ms]
Feb 24 01:09:48.775: INFO: Created: latency-svc-ldr97
Feb 24 01:09:48.783: INFO: Got endpoints: latency-svc-ldr97 [699.204021ms]
Feb 24 01:09:48.864: INFO: Created: latency-svc-x4gfk
Feb 24 01:09:48.897: INFO: Got endpoints: latency-svc-x4gfk [785.009353ms]
Feb 24 01:09:48.900: INFO: Created: latency-svc-qscjf
Feb 24 01:09:48.908: INFO: Got endpoints: latency-svc-qscjf [763.426292ms]
Feb 24 01:09:48.933: INFO: Created: latency-svc-7zx4x
Feb 24 01:09:48.947: INFO: Got endpoints: latency-svc-7zx4x [799.477053ms]
Feb 24 01:09:49.033: INFO: Created: latency-svc-hn76c
Feb 24 01:09:49.037: INFO: Got endpoints: latency-svc-hn76c [813.057837ms]
Feb 24 01:09:49.118: INFO: Created: latency-svc-md2gh
Feb 24 01:09:49.174: INFO: Got endpoints: latency-svc-md2gh [919.282326ms]
Feb 24 01:09:49.187: INFO: Created: latency-svc-t2l95
Feb 24 01:09:49.196: INFO: Got endpoints: latency-svc-t2l95 [886.499414ms]
Feb 24 01:09:49.221: INFO: Created: latency-svc-jskvv
Feb 24 01:09:49.228: INFO: Got endpoints: latency-svc-jskvv [795.728657ms]
Feb 24 01:09:49.385: INFO: Created: latency-svc-fm2g2
Feb 24 01:09:49.440: INFO: Got endpoints: latency-svc-fm2g2 [969.363436ms]
Feb 24 01:09:49.445: INFO: Created: latency-svc-gc7qq
Feb 24 01:09:49.598: INFO: Got endpoints: latency-svc-gc7qq [1.095138384s]
Feb 24 01:09:49.643: INFO: Created: latency-svc-gd5qz
Feb 24 01:09:49.649: INFO: Got endpoints: latency-svc-gd5qz [1.055584612s]
Feb 24 01:09:49.682: INFO: Created: latency-svc-nwj98
Feb 24 01:09:49.781: INFO: Got endpoints: latency-svc-nwj98 [1.179683144s]
Feb 24 01:09:49.801: INFO: Created: latency-svc-6jdhc
Feb 24 01:09:49.860: INFO: Got endpoints: latency-svc-6jdhc [1.214786305s]
Feb 24 01:09:50.014: INFO: Created: latency-svc-vln9p
Feb 24 01:09:50.016: INFO: Got endpoints: latency-svc-vln9p [1.306408101s]
Feb 24 01:09:50.048: INFO: Created: latency-svc-jzvcd
Feb 24 01:09:50.052: INFO: Got endpoints: latency-svc-jzvcd [1.278584643s]
Feb 24 01:09:50.070: INFO: Created: latency-svc-sz6sl
Feb 24 01:09:50.090: INFO: Got endpoints: latency-svc-sz6sl [1.306415697s]
Feb 24 01:09:50.094: INFO: Created: latency-svc-5z9vk
Feb 24 01:09:50.145: INFO: Got endpoints: latency-svc-5z9vk [1.246909303s]
Feb 24 01:09:50.146: INFO: Created: latency-svc-jw2k8
Feb 24 01:09:50.179: INFO: Got endpoints: latency-svc-jw2k8 [1.27133629s]
Feb 24 01:09:50.186: INFO: Created: latency-svc-9s2vd
Feb 24 01:09:50.212: INFO: Got endpoints: latency-svc-9s2vd [1.265093741s]
Feb 24 01:09:50.239: INFO: Created: latency-svc-9pxn9
Feb 24 01:09:50.306: INFO: Got endpoints: latency-svc-9pxn9 [1.268852434s]
Feb 24 01:09:50.336: INFO: Created: latency-svc-zrcld
Feb 24 01:09:50.349: INFO: Got endpoints: latency-svc-zrcld [1.174965471s]
Feb 24 01:09:50.371: INFO: Created: latency-svc-gv8jw
Feb 24 01:09:50.387: INFO: Got endpoints: latency-svc-gv8jw [1.190022032s]
Feb 24 01:09:50.403: INFO: Created: latency-svc-zt4jt
Feb 24 01:09:50.473: INFO: Got endpoints: latency-svc-zt4jt [1.245236429s]
Feb 24 01:09:50.475: INFO: Created: latency-svc-cfqv8
Feb 24 01:09:50.484: INFO: Got endpoints: latency-svc-cfqv8 [1.044076676s]
Feb 24 01:09:50.518: INFO: Created: latency-svc-nwvsp
Feb 24 01:09:50.526: INFO: Got endpoints: latency-svc-nwvsp [927.871431ms]
Feb 24 01:09:50.612: INFO: Created: latency-svc-rnmhp
Feb 24 01:09:50.615: INFO: Got endpoints: latency-svc-rnmhp [965.276832ms]
Feb 24 01:09:50.647: INFO: Created: latency-svc-jr2zw
Feb 24 01:09:50.659: INFO: Got endpoints: latency-svc-jr2zw [877.237545ms]
Feb 24 01:09:50.683: INFO: Created: latency-svc-f2vxc
Feb 24 01:09:50.695: INFO: Got endpoints: latency-svc-f2vxc [834.555326ms]
Feb 24 01:09:50.755: INFO: Created: latency-svc-mmvx4
Feb 24 01:09:50.758: INFO: Got endpoints: latency-svc-mmvx4 [742.037991ms]
Feb 24 01:09:50.785: INFO: Created: latency-svc-thtq2
Feb 24 01:09:50.790: INFO: Got endpoints: latency-svc-thtq2 [737.539953ms]
Feb 24 01:09:50.813: INFO: Created: latency-svc-zb2hs
Feb 24 01:09:50.829: INFO: Got endpoints: latency-svc-zb2hs [739.125506ms]
Feb 24 01:09:50.848: INFO: Created: latency-svc-mrbxj
Feb 24 01:09:50.959: INFO: Got endpoints: latency-svc-mrbxj [814.435623ms]
Feb 24 01:09:50.966: INFO: Created: latency-svc-2wh27
Feb 24 01:09:50.971: INFO: Got endpoints: latency-svc-2wh27 [791.843541ms]
Feb 24 01:09:51.007: INFO: Created: latency-svc-r5hln
Feb 24 01:09:51.028: INFO: Got endpoints: latency-svc-r5hln [815.394683ms]
Feb 24 01:09:51.115: INFO: Created: latency-svc-dvh2m
Feb 24 01:09:51.130: INFO: Got endpoints: latency-svc-dvh2m [824.546425ms]
Feb 24 01:09:51.151: INFO: Created: latency-svc-wzb8s
Feb 24 01:09:51.155: INFO: Got endpoints: latency-svc-wzb8s [805.805844ms]
Feb 24 01:09:51.179: INFO: Created: latency-svc-2pmlg
Feb 24 01:09:51.199: INFO: Got endpoints: latency-svc-2pmlg [812.132351ms]
Feb 24 01:09:51.201: INFO: Created: latency-svc-5tz2t
Feb 24 01:09:51.256: INFO: Got endpoints: latency-svc-5tz2t [783.049812ms]
Feb 24 01:09:51.258: INFO: Created: latency-svc-vfz4p
Feb 24 01:09:51.301: INFO: Got endpoints: latency-svc-vfz4p [817.105012ms]
Feb 24 01:09:51.352: INFO: Created: latency-svc-mbpbn
Feb 24 01:09:51.420: INFO: Created: latency-svc-289zm
Feb 24 01:09:51.423: INFO: Got endpoints: latency-svc-mbpbn [895.953419ms]
Feb 24 01:09:51.434: INFO: Got endpoints: latency-svc-289zm [818.910619ms]
Feb 24 01:09:51.477: INFO: Created: latency-svc-4d9hk
Feb 24 01:09:51.487: INFO: Got endpoints: latency-svc-4d9hk [827.580746ms]
Feb 24 01:09:51.509: INFO: Created: latency-svc-sc2h2
Feb 24 01:09:51.577: INFO: Got endpoints: latency-svc-sc2h2 [881.598874ms]
Feb 24 01:09:51.607: INFO: Created: latency-svc-z784j
Feb 24 01:09:51.619: INFO: Got endpoints: latency-svc-z784j [860.21774ms]
Feb 24 01:09:51.658: INFO: Created: latency-svc-jzh6x
Feb 24 01:09:51.664: INFO: Got endpoints: latency-svc-jzh6x [873.909377ms]
Feb 24 01:09:51.732: INFO: Created: latency-svc-76mqc
Feb 24 01:09:51.769: INFO: Created: latency-svc-zxqgg
Feb 24 01:09:51.769: INFO: Got endpoints: latency-svc-76mqc [939.951586ms]
Feb 24 01:09:51.785: INFO: Got endpoints: latency-svc-zxqgg [825.509987ms]
Feb 24 01:09:51.822: INFO: Created: latency-svc-rmqxg
Feb 24 01:09:51.966: INFO: Got endpoints: latency-svc-rmqxg [994.02491ms]
Feb 24 01:09:52.017: INFO: Created: latency-svc-wgznb
Feb 24 01:09:52.044: INFO: Got endpoints: latency-svc-wgznb [1.016289128s]
Feb 24 01:09:52.047: INFO: Created: latency-svc-d464t
Feb 24 01:09:52.054: INFO: Got endpoints: latency-svc-d464t [923.360482ms]
Feb 24 01:09:52.054: INFO: Latencies: [120.363975ms 133.05122ms 135.755676ms 161.904348ms 197.809066ms 387.594886ms 500.700381ms 691.503847ms 699.204021ms 706.695692ms 723.93412ms 728.852375ms 734.785188ms 737.539953ms 739.125506ms 742.037991ms 744.789093ms 750.614262ms 751.547077ms 757.277506ms 762.70936ms 763.426292ms 767.192496ms 768.07699ms 771.812538ms 772.032979ms 773.104151ms 775.570121ms 776.818415ms 778.334243ms 783.049812ms 785.009353ms 785.331226ms 786.43222ms 789.859264ms 791.843541ms 795.728657ms 796.193111ms 796.532795ms 799.318635ms 799.477053ms 799.64136ms 804.259279ms 805.189572ms 805.796377ms 805.805844ms 807.830946ms 810.347729ms 811.807203ms 812.132351ms 812.230641ms 812.666433ms 813.057837ms 814.435623ms 815.394683ms 816.526113ms 816.98227ms 817.105012ms 817.916229ms 818.910619ms 823.102126ms 823.423059ms 824.546425ms 825.509987ms 827.580746ms 828.211014ms 834.292555ms 834.555326ms 836.159387ms 836.819871ms 838.405593ms 841.819947ms 843.9741ms 846.862108ms 860.21774ms 860.846105ms 862.307672ms 863.962019ms 871.68703ms 873.909377ms 874.075241ms 877.237545ms 878.365411ms 880.665677ms 881.598874ms 886.231649ms 886.499414ms 890.150837ms 891.282947ms 893.224275ms 895.210921ms 895.953419ms 897.581964ms 907.49919ms 914.544773ms 917.470511ms 919.282326ms 921.13653ms 922.507002ms 923.360482ms 927.871431ms 928.268149ms 928.409271ms 937.758162ms 939.951586ms 944.409937ms 947.327734ms 948.117513ms 949.892414ms 952.742822ms 954.397368ms 954.774414ms 957.56157ms 958.137921ms 958.177934ms 965.276832ms 969.363436ms 971.533414ms 971.750481ms 972.098032ms 981.881578ms 994.02491ms 999.663641ms 1.009497398s 1.016289128s 1.025773167s 1.02638214s 1.035823636s 1.035833819s 1.040642001s 1.040977801s 1.041817724s 1.044076676s 1.047154556s 1.049403009s 1.055584612s 1.062093438s 1.067674975s 1.073067494s 1.078977632s 1.091372508s 1.095138384s 1.105413821s 1.139386374s 1.169613358s 1.174965471s 1.179683144s 1.18588464s 1.190022032s 1.194171651s 1.205841457s 1.214786305s 1.245236429s 1.246194978s 1.246909303s 1.251452452s 1.265093741s 1.268852434s 1.27133629s 1.278584643s 1.288202616s 1.297991038s 1.300116107s 1.302793553s 1.306408101s 1.306415697s 1.312392683s 1.31647614s 1.345272943s 1.354357562s 1.370690277s 1.38099398s 1.396191854s 1.399098407s 1.406438688s 1.408028903s 1.413109487s 1.44969696s 1.5090712s 1.515028393s 1.526243078s 1.530514674s 1.542688841s 1.546154925s 1.546592397s 1.55020038s 1.55156655s 1.55221671s 1.552577187s 1.553982306s 1.564378937s 1.572764438s 1.578848795s 1.59264932s 1.634325751s 1.647366221s 1.65370337s 1.695766061s 1.696412627s 1.741885009s]
Feb 24 01:09:52.054: INFO: 50 %ile: 927.871431ms
Feb 24 01:09:52.055: INFO: 90 %ile: 1.526243078s
Feb 24 01:09:52.055: INFO: 99 %ile: 1.696412627s
Feb 24 01:09:52.055: INFO: Total sample count: 200
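The 50/90/99 %ile figures above are computed over the 200 sorted latency samples. A minimal sketch of one common approach, nearest-rank percentiles (an assumption for illustration; the e2e framework's exact rounding may differ):

```python
import math

# Hedged sketch, not the framework's code: nearest-rank percentile is the
# smallest sample with at least p percent of the samples at or below it.
def percentile(sorted_samples, p):
    rank = math.ceil(p / 100.0 * len(sorted_samples))
    return sorted_samples[max(rank - 1, 0)]

# A handful of the latencies above, in seconds, purely for illustration.
samples = sorted([0.120363975, 0.13305122, 0.927871431,
                  1.526243078, 1.696412627, 1.741885009])
p50 = percentile(samples, 50)
```

On the full 200-sample set this reproduces the pattern of the summary: the 50th percentile falls near the middle of the sub-second cluster, while the 99th sits among the slowest few services.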
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:09:52.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5083" for this suite.

• [SLOW TEST:22.611 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":280,"completed":197,"skipped":3261,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
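The per-spec progress records interleaved through this log, such as the PASSED line above, are single-line JSON, so run totals can be recovered with a plain `json.loads`. This is only a reading aid for the log, not part of the suite:

```python
import json

# One progress record copied verbatim from the log above.
line = ('{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  '
        '[Conformance]","total":280,"completed":197,"skipped":3261,"failed":1,'
        '"failures":["[sig-cli] Kubectl client Guestbook application should create and '
        'stop a working application  [Conformance]"]}')

progress = json.loads(line)
remaining = progress["total"] - progress["completed"]  # specs still to run
```

Grepping these lines out of a long run and parsing them this way gives a quick failure summary without reading the full transcript.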
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:09:52.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:10:02.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8349" for this suite.

• [SLOW TEST:10.653 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":198,"skipped":3294,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:10:02.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 24 01:10:29.771: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:29.779: INFO: Pod pod-with-prestop-http-hook still exists
Feb 24 01:10:31.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:31.855: INFO: Pod pod-with-prestop-http-hook still exists
Feb 24 01:10:33.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:33.787: INFO: Pod pod-with-prestop-http-hook still exists
Feb 24 01:10:35.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:35.801: INFO: Pod pod-with-prestop-http-hook still exists
Feb 24 01:10:37.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:37.789: INFO: Pod pod-with-prestop-http-hook still exists
Feb 24 01:10:39.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:39.817: INFO: Pod pod-with-prestop-http-hook still exists
Feb 24 01:10:41.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:41.857: INFO: Pod pod-with-prestop-http-hook still exists
Feb 24 01:10:43.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 24 01:10:43.794: INFO: Pod pod-with-prestop-http-hook no longer exists
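The delete-then-poll pattern above, re-checking roughly every 2 seconds until the pod is gone, can be sketched as follows; `pod_exists`, the 2s interval, and the timeout are illustrative stand-ins, not the framework's actual API:

```python
import time

def wait_for_pod_to_disappear(pod_exists, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Poll pod_exists() every `interval` seconds until it returns False or
    `timeout` elapses. Returns True if the pod disappeared in time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not pod_exists():
            return True   # "Pod ... no longer exists"
        sleep(interval)   # "Pod ... still exists" -> wait and retry
    return False          # timed out while the pod still existed
```

Injecting `sleep` as a parameter keeps the loop testable without real delays; in the log above the pod took eight polls (~16s) to disappear because its preStop hook ran first.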
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:10:43.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4209" for this suite.

• [SLOW TEST:40.968 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":199,"skipped":3336,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:10:43.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 24 01:10:53.734: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:10:53.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2998" for this suite.

• [SLOW TEST:9.986 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":200,"skipped":3348,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:10:53.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0224 01:10:55.203433      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 01:10:55.203: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:10:55.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1044" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":201,"skipped":3371,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:10:55.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 24 01:10:55.399: INFO: Waiting up to 5m0s for pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e" in namespace "emptydir-1601" to be "success or failure"
Feb 24 01:10:55.422: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.625407ms
Feb 24 01:10:57.456: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05638564s
Feb 24 01:10:59.478: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078450239s
Feb 24 01:11:01.754: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355059308s
Feb 24 01:11:05.114: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.714532639s
Feb 24 01:11:07.703: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.303989834s
Feb 24 01:11:09.716: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.316562745s
Feb 24 01:11:11.722: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.32307772s
Feb 24 01:11:13.735: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.336159674s
STEP: Saw pod success
Feb 24 01:11:13.735: INFO: Pod "pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e" satisfied condition "success or failure"
Feb 24 01:11:13.740: INFO: Trying to get logs from node jerma-node pod pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e container test-container: 
STEP: delete the pod
Feb 24 01:11:14.004: INFO: Waiting for pod pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e to disappear
Feb 24 01:11:14.012: INFO: Pod pod-179547d6-7505-46c2-9f5f-0deabf0bcd7e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:11:14.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1601" for this suite.

• [SLOW TEST:18.800 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":202,"skipped":3372,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
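The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from the framework polling the pod's phase until it reaches a terminal state. A minimal sketch of that loop (the function name and stubbed phase source are hypothetical, not the framework's actual code):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase, mirroring the
    'success or failure' wait logged above. Returns the terminal phase."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if time.monotonic() - start > timeout:
            raise TimeoutError("pod never reached a terminal phase")
        sleep(interval)

# Simulated status sequence like the log: several Pending checks, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval=0)
```

Each `Elapsed:` line in the log corresponds to one iteration of such a loop, spaced roughly `interval` apart.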
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:11:14.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 24 01:11:14.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2975'
Feb 24 01:11:14.303: INFO: stderr: ""
Feb 24 01:11:14.303: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb 24 01:11:24.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2975 -o json'
Feb 24 01:11:24.440: INFO: stderr: ""
Feb 24 01:11:24.440: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-24T01:11:14Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2975\",\n        \"resourceVersion\": \"10338937\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2975/pods/e2e-test-httpd-pod\",\n        \"uid\": \"63e38445-5799-4d82-868e-e753c28d6c80\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-5g74k\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-5g74k\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-5g74k\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T01:11:14Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T01:11:20Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T01:11:20Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-24T01:11:14Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://683fec6e6ed31c75d95be1839caa94f66bda22799230b62b421538f7d6043982\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                   
     \"startedAt\": \"2020-02-24T01:11:19Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-24T01:11:14Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 24 01:11:24.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2975'
Feb 24 01:11:24.821: INFO: stderr: ""
Feb 24 01:11:24.822: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904
Feb 24 01:11:24.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2975'
Feb 24 01:11:31.205: INFO: stderr: ""
Feb 24 01:11:31.205: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:11:31.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2975" for this suite.

• [SLOW TEST:17.192 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":280,"completed":203,"skipped":3385,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
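Before the `kubectl replace -f -` step above, the test rewrites the fetched pod manifest so the container runs `busybox:1.29` instead of `httpd:2.4.38-alpine`. A sketch of that manifest edit under stated assumptions (the helper name and the trimmed manifest are illustrative, not the test's actual code):

```python
import copy

def replace_container_image(pod_manifest, container_name, new_image):
    """Return a copy of the pod manifest with one container's image swapped,
    analogous to what the test feeds to 'kubectl replace -f -'."""
    pod = copy.deepcopy(pod_manifest)
    for container in pod["spec"]["containers"]:
        if container["name"] == container_name:
            container["image"] = new_image
            break
    else:
        raise KeyError(container_name)
    return pod

# Trimmed version of the pod JSON dumped in the log above.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "e2e-test-httpd-pod"},
    "spec": {"containers": [{"name": "e2e-test-httpd-pod",
                             "image": "docker.io/library/httpd:2.4.38-alpine"}]},
}
patched = replace_container_image(pod, "e2e-test-httpd-pod",
                                  "docker.io/library/busybox:1.29")
```

The deep copy matters: `kubectl replace` needs the full object (including `metadata.name`), so the test mutates a copy of the retrieved manifest rather than building one from scratch.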
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:11:31.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-961f276a-b4d2-4a14-ac54-23ce0e39ac4d in namespace container-probe-961
Feb 24 01:11:39.464: INFO: Started pod busybox-961f276a-b4d2-4a14-ac54-23ce0e39ac4d in namespace container-probe-961
STEP: checking the pod's current state and verifying that restartCount is present
Feb 24 01:11:39.471: INFO: Initial restart count of pod busybox-961f276a-b4d2-4a14-ac54-23ce0e39ac4d is 0
Feb 24 01:12:28.081: INFO: Restart count of pod container-probe-961/busybox-961f276a-b4d2-4a14-ac54-23ce0e39ac4d is now 1 (48.609870975s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:12:28.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-961" for this suite.

• [SLOW TEST:56.906 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3430,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
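The probe test above records the initial `restartCount` (0) and then watches for it to rise after the liveness probe's `cat /tmp/health` starts failing. A minimal sketch of that check (the helper and the stubbed status source are hypothetical):

```python
def wait_for_restart(get_restart_count, initial, attempts=100):
    """Poll until the container restart count rises above its initial value,
    as the probe test does once the liveness probe begins to fail."""
    count = initial
    for _ in range(attempts):
        count = get_restart_count()
        if count > initial:
            return count
    raise TimeoutError(f"restart count stayed at {count}")

# Simulated containerStatuses[].restartCount readings like the log's 0 -> 1 step.
counts = iter([0, 0, 0, 1])
observed = wait_for_restart(lambda: next(counts), initial=0)
```

The ~48s elapsed time in the log reflects the kubelet's probe failure threshold plus container restart latency, not the polling itself.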
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:12:28.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 24 01:12:28.241: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339139 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 24 01:12:28.241: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339139 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 24 01:12:38.252: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339174 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 24 01:12:38.252: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339174 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 24 01:12:48.266: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339198 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 24 01:12:48.266: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339198 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 24 01:12:58.275: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339222 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 24 01:12:58.276: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-a 1465a343-dd97-4f4a-90af-921ac40a6193 10339222 0 2020-02-24 01:12:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 24 01:13:08.288: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-b dabdbda6-85f7-4876-adee-28b6d5ede4e9 10339248 0 2020-02-24 01:13:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 24 01:13:08.288: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-b dabdbda6-85f7-4876-adee-28b6d5ede4e9 10339248 0 2020-02-24 01:13:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 24 01:13:18.303: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-b dabdbda6-85f7-4876-adee-28b6d5ede4e9 10339270 0 2020-02-24 01:13:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 24 01:13:18.303: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5449 /api/v1/namespaces/watch-5449/configmaps/e2e-watch-test-configmap-b dabdbda6-85f7-4876-adee-28b6d5ede4e9 10339270 0 2020-02-24 01:13:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:13:28.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5449" for this suite.

• [SLOW TEST:60.221 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":205,"skipped":3432,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
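Each duplicated `Got : ADDED/MODIFIED/DELETED` pair above is the same event arriving on two of the three label-selector watches (A, B, and A-or-B). A sketch of that fan-out, with watcher names and selector closures invented for illustration:

```python
def dispatch(event, watchers):
    """Deliver a watch event to every watcher whose label selector matches,
    mirroring the A / B / A-or-B watches set up by the test above."""
    return sorted(name for name, selector in watchers.items()
                  if selector(event["labels"]))

watchers = {
    "watch-A":  lambda labels: labels.get("watch-this-configmap") == "multiple-watchers-A",
    "watch-B":  lambda labels: labels.get("watch-this-configmap") == "multiple-watchers-B",
    "watch-AB": lambda labels: labels.get("watch-this-configmap")
                in ("multiple-watchers-A", "multiple-watchers-B"),
}

# A configmap labeled for A is seen by watch-A and watch-AB, but not watch-B,
# which is why the log shows each event for configmap A exactly twice.
seen = dispatch({"type": "ADDED",
                 "labels": {"watch-this-configmap": "multiple-watchers-A"}},
                watchers)
```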
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:13:28.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 24 01:13:28.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9946'
Feb 24 01:13:28.733: INFO: stderr: ""
Feb 24 01:13:28.733: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 24 01:13:29.745: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:29.745: INFO: Found 0 / 1
Feb 24 01:13:30.747: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:30.748: INFO: Found 0 / 1
Feb 24 01:13:31.748: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:31.748: INFO: Found 0 / 1
Feb 24 01:13:32.742: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:32.742: INFO: Found 0 / 1
Feb 24 01:13:34.277: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:34.278: INFO: Found 0 / 1
Feb 24 01:13:34.742: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:34.742: INFO: Found 0 / 1
Feb 24 01:13:35.743: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:35.743: INFO: Found 0 / 1
Feb 24 01:13:37.043: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:37.043: INFO: Found 1 / 1
Feb 24 01:13:37.043: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 24 01:13:37.060: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:37.060: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 24 01:13:37.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-btqdd --namespace=kubectl-9946 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 24 01:13:37.195: INFO: stderr: ""
Feb 24 01:13:37.195: INFO: stdout: "pod/agnhost-master-btqdd patched\n"
STEP: checking annotations
Feb 24 01:13:37.205: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:13:37.205: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:13:37.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9946" for this suite.

• [SLOW TEST:8.865 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":280,"completed":206,"skipped":3446,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
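The `kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}}` step above merges a partial object into the stored pod. A sketch of that merge semantics for nested dicts (a simplified stand-in for the server-side patch, not kubectl's implementation):

```python
def merge_patch(obj, patch):
    """Recursively merge a partial object into obj, in the spirit of the
    JSON-merge-style patch the test sends via 'kubectl patch -p'."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge_patch(obj[key], value)
        else:
            obj[key] = value
    return obj

pod = {"metadata": {"name": "agnhost-master-btqdd", "annotations": {}}}
merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

Only the supplied leaves change; sibling fields such as `metadata.name` are left intact, which is what the test's "checking annotations" step relies on.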
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:13:37.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-e25c3aa3-d613-417e-aaf9-f3f6460c598a
STEP: Creating a pod to test consume secrets
Feb 24 01:13:37.311: INFO: Waiting up to 5m0s for pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995" in namespace "secrets-8557" to be "success or failure"
Feb 24 01:13:37.315: INFO: Pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.818496ms
Feb 24 01:13:39.323: INFO: Pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012651562s
Feb 24 01:13:41.332: INFO: Pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020987731s
Feb 24 01:13:43.343: INFO: Pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032520555s
Feb 24 01:13:45.368: INFO: Pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057337636s
Feb 24 01:13:47.525: INFO: Pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214557332s
STEP: Saw pod success
Feb 24 01:13:47.525: INFO: Pod "pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995" satisfied condition "success or failure"
Feb 24 01:13:47.566: INFO: Trying to get logs from node jerma-node pod pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995 container secret-volume-test: 
STEP: delete the pod
Feb 24 01:13:47.726: INFO: Waiting for pod pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995 to disappear
Feb 24 01:13:47.731: INFO: Pod pod-secrets-42e29712-cc38-48fd-8e70-f5f4cbe51995 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:13:47.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8557" for this suite.

• [SLOW TEST:10.531 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":207,"skipped":3451,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:13:47.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-5741
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5741 to expose endpoints map[]
Feb 24 01:13:48.097: INFO: Get endpoints failed (13.810019ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 24 01:13:49.104: INFO: successfully validated that service multi-endpoint-test in namespace services-5741 exposes endpoints map[] (1.020002664s elapsed)
STEP: Creating pod pod1 in namespace services-5741
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5741 to expose endpoints map[pod1:[100]]
Feb 24 01:13:53.200: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.078675474s elapsed, will retry)
Feb 24 01:13:56.228: INFO: successfully validated that service multi-endpoint-test in namespace services-5741 exposes endpoints map[pod1:[100]] (7.106578287s elapsed)
STEP: Creating pod pod2 in namespace services-5741
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5741 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 24 01:14:00.520: INFO: Unexpected endpoints: found map[2cdb1362-1118-442b-bd42-ad24f280f9cd:[100]], expected map[pod1:[100] pod2:[101]] (4.283062737s elapsed, will retry)
Feb 24 01:14:04.563: INFO: successfully validated that service multi-endpoint-test in namespace services-5741 exposes endpoints map[pod1:[100] pod2:[101]] (8.326441985s elapsed)
STEP: Deleting pod pod1 in namespace services-5741
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5741 to expose endpoints map[pod2:[101]]
Feb 24 01:14:04.665: INFO: successfully validated that service multi-endpoint-test in namespace services-5741 exposes endpoints map[pod2:[101]] (66.387432ms elapsed)
STEP: Deleting pod pod2 in namespace services-5741
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5741 to expose endpoints map[]
Feb 24 01:14:04.752: INFO: successfully validated that service multi-endpoint-test in namespace services-5741 exposes endpoints map[] (57.877813ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:14:04.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5741" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:17.209 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":280,"completed":208,"skipped":3461,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
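The `waiting ... to expose endpoints map[...]` steps above compare the service's observed endpoints (pod name to port list) against an expected map after each pod is created or deleted. A minimal sketch of that wait, with the helper name and stubbed endpoint states invented for illustration:

```python
def wait_for_endpoints(get_endpoints, expected, attempts=100):
    """Poll until the service's endpoints map (pod name -> ports) equals the
    expected map, as in the 'expose endpoints' waits logged above."""
    found = None
    for _ in range(attempts):
        found = get_endpoints()
        if found == expected:
            return found
    raise TimeoutError(f"expected {expected}, last saw {found}")

# Simulated endpoint states matching the log: empty, then pod1, then pod1+pod2.
states = iter([{}, {"pod1": [100]}, {"pod1": [100], "pod2": [101]}])
final = wait_for_endpoints(lambda: next(states),
                           {"pod1": [100], "pod2": [101]})
```

The "Unexpected endpoints ... will retry" lines in the log are the non-matching intermediate states this loop skips past while kube-proxy and the endpoints controller catch up.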
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:14:04.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 24 01:14:05.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4748'
Feb 24 01:14:05.198: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 01:14:05.198: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Feb 24 01:14:07.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4748'
Feb 24 01:14:08.382: INFO: stderr: ""
Feb 24 01:14:08.382: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:14:08.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4748" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":280,"completed":209,"skipped":3486,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:14:08.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:14:08.961: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2" in namespace "security-context-test-6248" to be "success or failure"
Feb 24 01:14:08.973: INFO: Pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.577453ms
Feb 24 01:14:11.694: INFO: Pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.73297025s
Feb 24 01:14:13.704: INFO: Pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.742376909s
Feb 24 01:14:15.712: INFO: Pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.751180859s
Feb 24 01:14:17.720: INFO: Pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758475157s
Feb 24 01:14:19.726: INFO: Pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.764726803s
Feb 24 01:14:19.726: INFO: Pod "busybox-readonly-false-d901e3df-9f4c-4a62-a818-36216dfd3ba2" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:14:19.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6248" for this suite.

• [SLOW TEST:11.294 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":210,"skipped":3492,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:14:19.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:14:19.901: INFO: Creating ReplicaSet my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e
Feb 24 01:14:19.945: INFO: Pod name my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e: Found 0 pods out of 1
Feb 24 01:14:24.971: INFO: Pod name my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e: Found 1 pods out of 1
Feb 24 01:14:24.971: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e" is running
Feb 24 01:14:28.999: INFO: Pod "my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e-dgvjm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 01:14:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 01:14:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 01:14:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-24 01:14:19 +0000 UTC Reason: Message:}])
Feb 24 01:14:28.999: INFO: Trying to dial the pod
Feb 24 01:14:34.024: INFO: Controller my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e: Got expected result from replica 1 [my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e-dgvjm]: "my-hostname-basic-09be5491-dfdf-4871-90a1-524db61eca8e-dgvjm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:14:34.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7370" for this suite.

• [SLOW TEST:14.291 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":211,"skipped":3492,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:14:34.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:14:35.451: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 24 01:14:37.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:14:39.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:14:43.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:14:44.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:14:45.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:14:47.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718103675, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:14:50.545: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:14:50.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:14:51.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9467" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:17.914 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":212,"skipped":3499,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:14:51.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:14:59.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5828" for this suite.

• [SLOW TEST:7.159 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":213,"skipped":3531,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:14:59.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-6cd68f61-9754-42fc-af89-62ef139241d4
STEP: Creating a pod to test consume configMaps
Feb 24 01:14:59.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6" in namespace "projected-2923" to be "success or failure"
Feb 24 01:14:59.366: INFO: Pod "pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.64337ms
Feb 24 01:15:01.373: INFO: Pod "pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047003674s
Feb 24 01:15:03.493: INFO: Pod "pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16670713s
Feb 24 01:15:05.500: INFO: Pod "pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173278567s
Feb 24 01:15:07.508: INFO: Pod "pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.181400446s
STEP: Saw pod success
Feb 24 01:15:07.508: INFO: Pod "pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6" satisfied condition "success or failure"
Feb 24 01:15:07.512: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 01:15:07.560: INFO: Waiting for pod pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6 to disappear
Feb 24 01:15:07.623: INFO: Pod pod-projected-configmaps-fcccb603-adde-43d3-bfaa-68e5f61412d6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:15:07.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2923" for this suite.

• [SLOW TEST:8.593 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3568,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:15:07.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0224 01:15:51.862803      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 01:15:51.863: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:15:51.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5668" for this suite.

• [SLOW TEST:44.166 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":215,"skipped":3597,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:15:51.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-cf0929fa-fe03-4e62-a9d2-f7732b29339d
STEP: Creating a pod to test consume configMaps
Feb 24 01:15:52.058: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536" in namespace "projected-6074" to be "success or failure"
Feb 24 01:15:52.093: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 35.01192ms
Feb 24 01:15:54.113: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054692008s
Feb 24 01:15:56.120: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061301101s
Feb 24 01:15:59.534: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 7.475589915s
Feb 24 01:16:02.944: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 10.885129976s
Feb 24 01:16:06.882: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 14.823288378s
Feb 24 01:16:09.140: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 17.081560505s
Feb 24 01:16:11.203: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 19.1445671s
Feb 24 01:16:13.906: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Pending", Reason="", readiness=false. Elapsed: 21.847553721s
Feb 24 01:16:15.918: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.860105325s
STEP: Saw pod success
Feb 24 01:16:15.919: INFO: Pod "pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536" satisfied condition "success or failure"
Feb 24 01:16:16.018: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 01:16:16.189: INFO: Waiting for pod pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536 to disappear
Feb 24 01:16:16.194: INFO: Pod pod-projected-configmaps-5ecd522d-6bd8-445f-a5dd-3aba22df5536 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:16:16.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6074" for this suite.

• [SLOW TEST:24.330 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":216,"skipped":3603,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:16:16.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:16:16.416: INFO: Creating deployment "webserver-deployment"
Feb 24 01:16:16.425: INFO: Waiting for observed generation 1
Feb 24 01:16:18.849: INFO: Waiting for all required pods to come up
Feb 24 01:16:19.868: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 24 01:16:53.949: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 24 01:16:53.958: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 24 01:16:53.968: INFO: Updating deployment webserver-deployment
Feb 24 01:16:53.968: INFO: Waiting for observed generation 2
Feb 24 01:16:57.118: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 24 01:16:58.019: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 24 01:16:58.042: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 24 01:16:58.184: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 24 01:16:58.184: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 24 01:16:58.187: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 24 01:16:58.193: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
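The replica counts asserted above (first rollout's ReplicaSet at 8, second at 5) follow from the Deployment's rolling-update parameters, which the object dump further down shows as MaxUnavailable:2, MaxSurge:3 on 10 desired replicas. A minimal sketch of that arithmetic, not the actual deployment controller code:

```python
# Hedged sketch: derive the per-ReplicaSet targets the test asserts
# during a rolling update of a 10-replica Deployment with
# maxUnavailable=2 and maxSurge=3 (values from the Deployment dump).
# Illustrative only; this mirrors the arithmetic, not the controller.

def rolling_update_targets(replicas, max_unavailable, max_surge):
    # The old ReplicaSet may scale down to replicas - maxUnavailable.
    old_rs = replicas - max_unavailable          # 10 - 2 = 8
    # Total pods may surge to replicas + maxSurge; the new ReplicaSet
    # gets whatever headroom remains above the old one.
    new_rs = (replicas + max_surge) - old_rs     # 13 - 8 = 5
    return old_rs, new_rs

print(rolling_update_targets(10, 2, 3))
```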
Feb 24 01:16:58.193: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 24 01:16:58.204: INFO: Updating deployment webserver-deployment
Feb 24 01:16:58.204: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 24 01:16:59.387: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 24 01:17:04.023: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
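The 20/13 split verified above is proportional scaling at work: when the Deployment is scaled from 10 to 30 mid-rollout, each ReplicaSet grows by the ratio of the new surge-adjusted total (30+3=33, matching the deployment.kubernetes.io/max-replicas:33 annotation in the dumps below) to the old one (10+3=13). A rough sketch of that arithmetic, under the assumption that rounding suffices here; the real controller also reconciles rounding leftovers:

```python
# Hedged sketch of Deployment proportional scaling, not the exact
# controller algorithm: scale each ReplicaSet by the ratio of the new
# to the old surge-adjusted replica ceiling.

def proportional_scale(rs_sizes, old_replicas, new_replicas, max_surge):
    old_max = old_replicas + max_surge   # 10 + 3 = 13
    new_max = new_replicas + max_surge   # 30 + 3 = 33
    # Each ReplicaSet keeps its share of the total as the ceiling grows.
    return [round(size * new_max / old_max) for size in rs_sizes]

# ReplicaSets at 8 and 5 pods; scaling the Deployment from 10 to 30.
print(proportional_scale([8, 5], 10, 30, 3))
```

This reproduces the .spec.replicas values the test checks: 20 for the first rollout's ReplicaSet and 13 for the second, summing to the surge ceiling of 33.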
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 24 01:17:08.705: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5011 /apis/apps/v1/namespaces/deployment-5011/deployments/webserver-deployment 027320c3-072d-4bce-9453-6fb28409a5da 10340474 3 2020-02-24 01:16:16 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00351c018  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-24 01:16:59 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-24 01:17:01 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 
UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Feb 24 01:17:09.690: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5011 /apis/apps/v1/namespaces/deployment-5011/replicasets/webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 10340456 3 2020-02-24 01:16:53 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 027320c3-072d-4bce-9453-6fb28409a5da 0xc00351c4f7 0xc00351c4f8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00351c568  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 24 01:17:09.690: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb 24 01:17:09.690: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5011 /apis/apps/v1/namespaces/deployment-5011/replicasets/webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 10340470 3 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 027320c3-072d-4bce-9453-6fb28409a5da 0xc00351c437 0xc00351c438}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00351c498  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb 24 01:17:11.561: INFO: Pod "webserver-deployment-595b5b9587-2dxgw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2dxgw webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-2dxgw a73d85e0-8a02-4fb1-b5c5-d6d0235e034a 10340315 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351ca67 0xc00351ca68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-24 01:16:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9158be767324609d6bc2b8e9c482d7c0acfc0085a1022ebc95f9d4ba85d80c72,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.562: INFO: Pod "webserver-deployment-595b5b9587-5lrrn" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5lrrn webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-5lrrn 6eab0748-5624-4c92-917e-03930be4a0fd 10340277 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351cbe7 0xc00351cbe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-24 01:16:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dc1e3321bca9c59f53251cdd835ed49c8df269cf45a9deef159dd0cca8f0de19,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.562: INFO: Pod "webserver-deployment-595b5b9587-64cd4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-64cd4 webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-64cd4 15e6908b-a1a4-4a40-a0d0-ba5c606f0936 10340309 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351cd67 0xc00351cd68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-24 01:16:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3941a8dc8ce24994078ee4dcddf25965bc9bcd57d59c78894fe23654243b995d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.562: INFO: Pod "webserver-deployment-595b5b9587-87fsb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-87fsb webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-87fsb c216f7e2-6229-46bd-b847-d53cd08ae4a2 10340421 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d087 0xc00351d088}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.563: INFO: Pod "webserver-deployment-595b5b9587-8cs6c" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8cs6c webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-8cs6c 0bc76941-065f-4350-85ba-dc568497054c 10340290 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d1c7 0xc00351d1c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-24 01:16:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4f78f875b43451f6c562eca99eedfa39423247ef2973d761c8a923dbe2ece878,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.563: INFO: Pod "webserver-deployment-595b5b9587-8ll56" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8ll56 webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-8ll56 46254ea4-ed0e-4d4e-9155-40837731507a 10340453 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d347 0xc00351d348}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.563: INFO: Pod "webserver-deployment-595b5b9587-dn5fk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dn5fk webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-dn5fk b197aedc-368d-491f-b593-4bf03aa4ec92 10340265 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d467 0xc00351d468}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-24 01:16:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1242edcbe9960038a52b5df6e9f7cd76432ed2be3e6f06692528153938a2057b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.564: INFO: Pod "webserver-deployment-595b5b9587-dpvsr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dpvsr webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-dpvsr 247a58fb-e7fb-41e4-a333-43724213f9ae 10340479 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d5d7 0xc00351d5d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-24 01:17:01 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.564: INFO: Pod "webserver-deployment-595b5b9587-gjs5p" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gjs5p webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-gjs5p aa2e8a4b-5c08-4c13-8b79-5c1f8996f79a 10340452 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d727 0xc00351d728}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.564: INFO: Pod "webserver-deployment-595b5b9587-gsbqh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gsbqh webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-gsbqh 90fa96c8-45a9-4e2e-a558-1a34b4721516 10340428 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d837 0xc00351d838}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.565: INFO: Pod "webserver-deployment-595b5b9587-lvh8q" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lvh8q webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-lvh8q 76669652-1231-4fcb-b2f4-9d83a040f79e 10340430 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351d957 0xc00351d958}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.565: INFO: Pod "webserver-deployment-595b5b9587-mcn22" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mcn22 webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-mcn22 6f5b22bf-b8ea-4a57-bf82-ad9b93cf4847 10340273 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351da77 0xc00351da78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-24 01:16:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7ed00f82dcd9653cafcae1a3919e658f8223896f921794d657db321819db201b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.565: INFO: Pod "webserver-deployment-595b5b9587-nq64c" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nq64c webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-nq64c 1738ea67-3b66-4c13-b409-28fa93d24138 10340467 0 2020-02-24 01:16:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351dbe7 0xc00351dbe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-24 01:17:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.565: INFO: Pod "webserver-deployment-595b5b9587-nw6vg" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nw6vg webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-nw6vg 29b81694-ebad-48ee-8d30-8f3078589c48 10340279 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351dd47 0xc00351dd48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-24 01:16:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://566f88462cf7281c3dc08d2cae746bd1bdf803c59967b0091e104435ebcf9f74,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.566: INFO: Pod "webserver-deployment-595b5b9587-qsvbb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qsvbb webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-qsvbb a63ea94d-27f5-4489-a814-745c2561d889 10340450 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351deb7 0xc00351deb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.566: INFO: Pod "webserver-deployment-595b5b9587-rfnkt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rfnkt webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-rfnkt 0f2b741c-7242-46b7-9800-8a6e608a8f76 10340482 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00351dfc7 0xc00351dfc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-24 01:17:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.566: INFO: Pod "webserver-deployment-595b5b9587-v5f6g" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-v5f6g webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-v5f6g f598d17f-fcb1-4316-b880-89731e2ec7e3 10340318 0 2020-02-24 01:16:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00113c177 0xc00113c178}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-02-24 01:16:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:16:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://812bb17e17ec74dbd4b8b7f14088e55ac32fb25d3d3515a6ab671782198b0d29,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.567: INFO: Pod "webserver-deployment-595b5b9587-w9kkh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w9kkh webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-w9kkh 4b7b4b3f-6285-4367-8c25-a9cf55aa976d 10340451 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00113c327 0xc00113c328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.567: INFO: Pod "webserver-deployment-595b5b9587-x2sv2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2sv2 webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-x2sv2 add2b8b5-902e-415d-a15d-13aa100aed4a 10340454 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00113c437 0xc00113c438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.567: INFO: Pod "webserver-deployment-595b5b9587-z4x2z" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-z4x2z webserver-deployment-595b5b9587- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-595b5b9587-z4x2z bc44aaf3-6728-4b9a-b4d5-1307ce2acb58 10340429 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f92efaa9-190f-4ab1-8d03-68177ad57ba1 0xc00113c577 0xc00113c578}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.567: INFO: Pod "webserver-deployment-c7997dcc8-57jcg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-57jcg webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-57jcg 9592ffde-df35-4eea-99e0-351dc14e17ab 10340469 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113c8f7 0xc00113c8f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-24 01:17:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.567: INFO: Pod "webserver-deployment-c7997dcc8-9jk2k" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9jk2k webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-9jk2k 194ca2a3-978c-44f8-9365-06a532fbeabd 10340476 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113cc57 0xc00113cc58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:17:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-24 01:17:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.568: INFO: Pod "webserver-deployment-c7997dcc8-bbl57" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bbl57 webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-bbl57 d12c6114-262d-4ee7-ba26-121b802bbae2 10340380 0 2020-02-24 01:16:54 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113cee7 0xc00113cee8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-24 01:16:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.568: INFO: Pod "webserver-deployment-c7997dcc8-bpcvl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bpcvl webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-bpcvl f53b59de-4313-43a6-a48e-454c700623fd 10340420 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113d0a7 0xc00113d0a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.568: INFO: Pod "webserver-deployment-c7997dcc8-f2g6j" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f2g6j webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-f2g6j e3b49e69-7e4e-4ead-a77c-e154a3f213dd 10340455 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113d2d7 0xc00113d2d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-24 01:16:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.569: INFO: Pod "webserver-deployment-c7997dcc8-fg6hn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fg6hn webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-fg6hn 5cb19f73-779b-473f-903e-7fdd9afe6c18 10340426 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113d457 0xc00113d458}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.569: INFO: Pod "webserver-deployment-c7997dcc8-kdm22" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kdm22 webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-kdm22 d028d4a4-26ae-4f0d-a08b-b02ddc40d7da 10340358 0 2020-02-24 01:16:54 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113d587 0xc00113d588}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-24 01:16:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.569: INFO: Pod "webserver-deployment-c7997dcc8-klbrg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-klbrg webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-klbrg 61c18be7-a612-4003-b192-aa98a1dae3fb 10340427 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113d6f7 0xc00113d6f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.569: INFO: Pod "webserver-deployment-c7997dcc8-sz9lk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sz9lk webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-sz9lk ded93d89-e1b3-48ed-a614-6310ba13ef95 10340384 0 2020-02-24 01:16:54 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113d867 0xc00113d868}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-24 01:16:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.570: INFO: Pod "webserver-deployment-c7997dcc8-tqwxh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tqwxh webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-tqwxh c491dd95-9c23-4c58-bd21-ea0876eb5340 10340375 0 2020-02-24 01:16:54 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113dc67 0xc00113dc68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-24 01:16:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.570: INFO: Pod "webserver-deployment-c7997dcc8-x7tnk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x7tnk webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-x7tnk 8a4ac5de-2793-4c43-9050-23cbfee563d1 10340371 0 2020-02-24 01:16:54 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc00113de77 0xc00113de78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:
ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-24 01:16:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.570: INFO: Pod "webserver-deployment-c7997dcc8-xh2x2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xh2x2 webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-xh2x2 3763b10d-e6a4-4af5-8275-8c608340f525 10340449 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc001a6e0a7 0xc001a6e0a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:
ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 24 01:17:11.570: INFO: Pod "webserver-deployment-c7997dcc8-zkxt2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zkxt2 webserver-deployment-c7997dcc8- deployment-5011 /api/v1/namespaces/deployment-5011/pods/webserver-deployment-c7997dcc8-zkxt2 d1e1fef7-9993-406d-8b4a-c4d20eb3eef0 10340431 0 2020-02-24 01:16:59 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c743028f-c889-49f4-8906-c70efd4dec21 0xc001a6e2f7 0xc001a6e2f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jgqjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jgqjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jgqjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:
ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:17:11.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5011" for this suite.

• [SLOW TEST:59.402 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":217,"skipped":3620,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:17:15.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:17:19.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 24 01:17:21.837: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-24T01:17:21Z generation:1 name:name1 resourceVersion:10340533 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9f0fed54-0979-4e60-927c-d722ba81ba44] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 24 01:17:31.936: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-24T01:17:31Z generation:1 name:name2 resourceVersion:10340611 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bf342918-a27f-4acb-b577-fb1c3ea7ed0d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 24 01:17:42.020: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-24T01:17:21Z generation:2 name:name1 resourceVersion:10340642 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9f0fed54-0979-4e60-927c-d722ba81ba44] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 24 01:17:52.286: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-24T01:17:31Z generation:2 name:name2 resourceVersion:10340689 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bf342918-a27f-4acb-b577-fb1c3ea7ed0d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 24 01:18:02.301: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-24T01:17:21Z generation:2 name:name1 resourceVersion:10340759 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9f0fed54-0979-4e60-927c-d722ba81ba44] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 24 01:18:12.313: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-24T01:17:31Z generation:2 name:name2 resourceVersion:10340785 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bf342918-a27f-4acb-b577-fb1c3ea7ed0d] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:18:22.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5305" for this suite.

• [SLOW TEST:67.365 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":218,"skipped":3620,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:18:22.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-2074/configmap-test-b1560111-0ad1-4f72-a59b-d418b9ca2e24
STEP: Creating a pod to test consume configMaps
Feb 24 01:18:23.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2" in namespace "configmap-2074" to be "success or failure"
Feb 24 01:18:23.147: INFO: Pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.497818ms
Feb 24 01:18:25.273: INFO: Pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137853615s
Feb 24 01:18:27.280: INFO: Pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144718379s
Feb 24 01:18:29.290: INFO: Pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15558301s
Feb 24 01:18:31.754: INFO: Pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.618887509s
Feb 24 01:18:33.772: INFO: Pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.63719984s
STEP: Saw pod success
Feb 24 01:18:33.772: INFO: Pod "pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2" satisfied condition "success or failure"
Feb 24 01:18:33.786: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2 container env-test: 
STEP: delete the pod
Feb 24 01:18:34.047: INFO: Waiting for pod pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2 to disappear
Feb 24 01:18:34.055: INFO: Pod pod-configmaps-0033b44f-f074-442d-bbf6-18818c188ce2 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:18:34.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2074" for this suite.

• [SLOW TEST:11.092 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":219,"skipped":3644,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:18:34.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6636
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6636
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6636
Feb 24 01:18:34.386: INFO: Found 0 stateful pods, waiting for 1
Feb 24 01:18:44.393: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 24 01:18:44.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 24 01:18:44.776: INFO: stderr: "I0224 01:18:44.561376    4124 log.go:172] (0xc000b88e70) (0xc000b80320) Create stream\nI0224 01:18:44.561744    4124 log.go:172] (0xc000b88e70) (0xc000b80320) Stream added, broadcasting: 1\nI0224 01:18:44.567472    4124 log.go:172] (0xc000b88e70) Reply frame received for 1\nI0224 01:18:44.567567    4124 log.go:172] (0xc000b88e70) (0xc000a7a000) Create stream\nI0224 01:18:44.567578    4124 log.go:172] (0xc000b88e70) (0xc000a7a000) Stream added, broadcasting: 3\nI0224 01:18:44.569550    4124 log.go:172] (0xc000b88e70) Reply frame received for 3\nI0224 01:18:44.569608    4124 log.go:172] (0xc000b88e70) (0xc000968000) Create stream\nI0224 01:18:44.569624    4124 log.go:172] (0xc000b88e70) (0xc000968000) Stream added, broadcasting: 5\nI0224 01:18:44.571129    4124 log.go:172] (0xc000b88e70) Reply frame received for 5\nI0224 01:18:44.674211    4124 log.go:172] (0xc000b88e70) Data frame received for 5\nI0224 01:18:44.674253    4124 log.go:172] (0xc000968000) (5) Data frame handling\nI0224 01:18:44.674263    4124 log.go:172] (0xc000968000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 01:18:44.698388    4124 log.go:172] (0xc000b88e70) Data frame received for 3\nI0224 01:18:44.698398    4124 log.go:172] (0xc000a7a000) (3) Data frame handling\nI0224 01:18:44.698405    4124 log.go:172] (0xc000a7a000) (3) Data frame sent\nI0224 01:18:44.766306    4124 log.go:172] (0xc000b88e70) Data frame received for 1\nI0224 01:18:44.766363    4124 log.go:172] (0xc000b88e70) (0xc000a7a000) Stream removed, broadcasting: 3\nI0224 01:18:44.766419    4124 log.go:172] (0xc000b80320) (1) Data frame handling\nI0224 01:18:44.766442    4124 log.go:172] (0xc000b80320) (1) Data frame sent\nI0224 01:18:44.766471    4124 log.go:172] (0xc000b88e70) (0xc000968000) Stream removed, broadcasting: 5\nI0224 01:18:44.766495    4124 log.go:172] (0xc000b88e70) (0xc000b80320) Stream removed, broadcasting: 1\nI0224 01:18:44.767284    4124 
log.go:172] (0xc000b88e70) Go away received\nI0224 01:18:44.767400    4124 log.go:172] (0xc000b88e70) (0xc000b80320) Stream removed, broadcasting: 1\nI0224 01:18:44.767504    4124 log.go:172] (0xc000b88e70) (0xc000a7a000) Stream removed, broadcasting: 3\nI0224 01:18:44.767527    4124 log.go:172] (0xc000b88e70) (0xc000968000) Stream removed, broadcasting: 5\n"
Feb 24 01:18:44.776: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 24 01:18:44.776: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 24 01:18:44.784: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 24 01:18:54.792: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 01:18:54.792: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 01:18:54.819: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999516s
Feb 24 01:18:55.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.983844789s
Feb 24 01:18:56.836: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.976184591s
Feb 24 01:18:57.842: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968389076s
Feb 24 01:18:58.849: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.961266173s
Feb 24 01:18:59.857: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.954359422s
Feb 24 01:19:00.868: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.94655782s
Feb 24 01:19:01.882: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.9360546s
Feb 24 01:19:02.889: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.921943552s
Feb 24 01:19:03.900: INFO: Verifying statefulset ss doesn't scale past 1 for another 914.937202ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6636
Feb 24 01:19:05.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 24 01:19:06.086: INFO: stderr: "I0224 01:19:05.880509    4140 log.go:172] (0xc000a526e0) (0xc000a60000) Create stream\nI0224 01:19:05.880729    4140 log.go:172] (0xc000a526e0) (0xc000a60000) Stream added, broadcasting: 1\nI0224 01:19:05.887125    4140 log.go:172] (0xc000a526e0) Reply frame received for 1\nI0224 01:19:05.887244    4140 log.go:172] (0xc000a526e0) (0xc0005bbb80) Create stream\nI0224 01:19:05.887261    4140 log.go:172] (0xc000a526e0) (0xc0005bbb80) Stream added, broadcasting: 3\nI0224 01:19:05.891075    4140 log.go:172] (0xc000a526e0) Reply frame received for 3\nI0224 01:19:05.891276    4140 log.go:172] (0xc000a526e0) (0xc000aea000) Create stream\nI0224 01:19:05.891316    4140 log.go:172] (0xc000a526e0) (0xc000aea000) Stream added, broadcasting: 5\nI0224 01:19:05.897978    4140 log.go:172] (0xc000a526e0) Reply frame received for 5\nI0224 01:19:05.998465    4140 log.go:172] (0xc000a526e0) Data frame received for 3\nI0224 01:19:05.998536    4140 log.go:172] (0xc0005bbb80) (3) Data frame handling\nI0224 01:19:05.998584    4140 log.go:172] (0xc0005bbb80) (3) Data frame sent\nI0224 01:19:05.998622    4140 log.go:172] (0xc000a526e0) Data frame received for 5\nI0224 01:19:05.998652    4140 log.go:172] (0xc000aea000) (5) Data frame handling\nI0224 01:19:05.998687    4140 log.go:172] (0xc000aea000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0224 01:19:06.071438    4140 log.go:172] (0xc000a526e0) (0xc0005bbb80) Stream removed, broadcasting: 3\nI0224 01:19:06.071713    4140 log.go:172] (0xc000a526e0) Data frame received for 1\nI0224 01:19:06.071729    4140 log.go:172] (0xc000a60000) (1) Data frame handling\nI0224 01:19:06.071738    4140 log.go:172] (0xc000a60000) (1) Data frame sent\nI0224 01:19:06.071775    4140 log.go:172] (0xc000a526e0) (0xc000aea000) Stream removed, broadcasting: 5\nI0224 01:19:06.071801    4140 log.go:172] (0xc000a526e0) (0xc000a60000) Stream removed, broadcasting: 1\nI0224 01:19:06.072249    4140 
log.go:172] (0xc000a526e0) (0xc000a60000) Stream removed, broadcasting: 1\nI0224 01:19:06.072263    4140 log.go:172] (0xc000a526e0) (0xc0005bbb80) Stream removed, broadcasting: 3\nI0224 01:19:06.072275    4140 log.go:172] (0xc000a526e0) (0xc000aea000) Stream removed, broadcasting: 5\nI0224 01:19:06.072455    4140 log.go:172] (0xc000a526e0) Go away received\n"
Feb 24 01:19:06.086: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 24 01:19:06.086: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 24 01:19:06.091: INFO: Found 1 stateful pods, waiting for 3
Feb 24 01:19:16.167: INFO: Found 2 stateful pods, waiting for 3
Feb 24 01:19:26.100: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 01:19:26.100: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 24 01:19:26.100: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 24 01:19:26.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 24 01:19:28.832: INFO: stderr: "I0224 01:19:28.665177    4160 log.go:172] (0xc0003c6a50) (0xc000701ea0) Create stream\nI0224 01:19:28.665267    4160 log.go:172] (0xc0003c6a50) (0xc000701ea0) Stream added, broadcasting: 1\nI0224 01:19:28.673032    4160 log.go:172] (0xc0003c6a50) Reply frame received for 1\nI0224 01:19:28.673108    4160 log.go:172] (0xc0003c6a50) (0xc000690780) Create stream\nI0224 01:19:28.673123    4160 log.go:172] (0xc0003c6a50) (0xc000690780) Stream added, broadcasting: 3\nI0224 01:19:28.674634    4160 log.go:172] (0xc0003c6a50) Reply frame received for 3\nI0224 01:19:28.674793    4160 log.go:172] (0xc0003c6a50) (0xc000743400) Create stream\nI0224 01:19:28.674832    4160 log.go:172] (0xc0003c6a50) (0xc000743400) Stream added, broadcasting: 5\nI0224 01:19:28.678488    4160 log.go:172] (0xc0003c6a50) Reply frame received for 5\nI0224 01:19:28.757377    4160 log.go:172] (0xc0003c6a50) Data frame received for 3\nI0224 01:19:28.757574    4160 log.go:172] (0xc000690780) (3) Data frame handling\nI0224 01:19:28.757610    4160 log.go:172] (0xc000690780) (3) Data frame sent\nI0224 01:19:28.757698    4160 log.go:172] (0xc0003c6a50) Data frame received for 5\nI0224 01:19:28.757727    4160 log.go:172] (0xc000743400) (5) Data frame handling\nI0224 01:19:28.757767    4160 log.go:172] (0xc000743400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 01:19:28.820345    4160 log.go:172] (0xc0003c6a50) (0xc000690780) Stream removed, broadcasting: 3\nI0224 01:19:28.821031    4160 log.go:172] (0xc0003c6a50) Data frame received for 1\nI0224 01:19:28.821176    4160 log.go:172] (0xc000701ea0) (1) Data frame handling\nI0224 01:19:28.821242    4160 log.go:172] (0xc000701ea0) (1) Data frame sent\nI0224 01:19:28.821911    4160 log.go:172] (0xc0003c6a50) (0xc000701ea0) Stream removed, broadcasting: 1\nI0224 01:19:28.822363    4160 log.go:172] (0xc0003c6a50) (0xc000743400) Stream removed, broadcasting: 5\nI0224 01:19:28.822442    4160 
log.go:172] (0xc0003c6a50) Go away received\nI0224 01:19:28.823711    4160 log.go:172] (0xc0003c6a50) (0xc000701ea0) Stream removed, broadcasting: 1\nI0224 01:19:28.823759    4160 log.go:172] (0xc0003c6a50) (0xc000690780) Stream removed, broadcasting: 3\nI0224 01:19:28.823774    4160 log.go:172] (0xc0003c6a50) (0xc000743400) Stream removed, broadcasting: 5\n"
Feb 24 01:19:28.832: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 24 01:19:28.832: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 24 01:19:28.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 24 01:19:29.277: INFO: stderr: "I0224 01:19:28.969843    4192 log.go:172] (0xc000ad2210) (0xc000a903c0) Create stream\nI0224 01:19:28.970180    4192 log.go:172] (0xc000ad2210) (0xc000a903c0) Stream added, broadcasting: 1\nI0224 01:19:28.975559    4192 log.go:172] (0xc000ad2210) Reply frame received for 1\nI0224 01:19:28.975762    4192 log.go:172] (0xc000ad2210) (0xc0009f01e0) Create stream\nI0224 01:19:28.975780    4192 log.go:172] (0xc000ad2210) (0xc0009f01e0) Stream added, broadcasting: 3\nI0224 01:19:28.977320    4192 log.go:172] (0xc000ad2210) Reply frame received for 3\nI0224 01:19:28.977432    4192 log.go:172] (0xc000ad2210) (0xc00098a000) Create stream\nI0224 01:19:28.977471    4192 log.go:172] (0xc000ad2210) (0xc00098a000) Stream added, broadcasting: 5\nI0224 01:19:28.982297    4192 log.go:172] (0xc000ad2210) Reply frame received for 5\nI0224 01:19:29.082876    4192 log.go:172] (0xc000ad2210) Data frame received for 5\nI0224 01:19:29.082946    4192 log.go:172] (0xc00098a000) (5) Data frame handling\nI0224 01:19:29.082963    4192 log.go:172] (0xc00098a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 01:19:29.149738    4192 log.go:172] (0xc000ad2210) Data frame received for 3\nI0224 01:19:29.149852    4192 log.go:172] (0xc0009f01e0) (3) Data frame handling\nI0224 01:19:29.149882    4192 log.go:172] (0xc0009f01e0) (3) Data frame sent\nI0224 01:19:29.265806    4192 log.go:172] (0xc000ad2210) Data frame received for 1\nI0224 01:19:29.265857    4192 log.go:172] (0xc000ad2210) (0xc0009f01e0) Stream removed, broadcasting: 3\nI0224 01:19:29.265894    4192 log.go:172] (0xc000a903c0) (1) Data frame handling\nI0224 01:19:29.265908    4192 log.go:172] (0xc000a903c0) (1) Data frame sent\nI0224 01:19:29.265921    4192 log.go:172] (0xc000ad2210) (0xc000a903c0) Stream removed, broadcasting: 1\nI0224 01:19:29.266011    4192 log.go:172] (0xc000ad2210) (0xc00098a000) Stream removed, broadcasting: 5\nI0224 01:19:29.266509    4192 
log.go:172] (0xc000ad2210) (0xc000a903c0) Stream removed, broadcasting: 1\nI0224 01:19:29.266528    4192 log.go:172] (0xc000ad2210) (0xc0009f01e0) Stream removed, broadcasting: 3\nI0224 01:19:29.266537    4192 log.go:172] (0xc000ad2210) (0xc00098a000) Stream removed, broadcasting: 5\n"
Feb 24 01:19:29.277: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 24 01:19:29.277: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 24 01:19:29.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 24 01:19:29.735: INFO: stderr: "I0224 01:19:29.497689    4212 log.go:172] (0xc00050d4a0) (0xc000651e00) Create stream\nI0224 01:19:29.497890    4212 log.go:172] (0xc00050d4a0) (0xc000651e00) Stream added, broadcasting: 1\nI0224 01:19:29.500789    4212 log.go:172] (0xc00050d4a0) Reply frame received for 1\nI0224 01:19:29.500879    4212 log.go:172] (0xc00050d4a0) (0xc0003097c0) Create stream\nI0224 01:19:29.500894    4212 log.go:172] (0xc00050d4a0) (0xc0003097c0) Stream added, broadcasting: 3\nI0224 01:19:29.502373    4212 log.go:172] (0xc00050d4a0) Reply frame received for 3\nI0224 01:19:29.502410    4212 log.go:172] (0xc00050d4a0) (0xc000309860) Create stream\nI0224 01:19:29.502429    4212 log.go:172] (0xc00050d4a0) (0xc000309860) Stream added, broadcasting: 5\nI0224 01:19:29.505006    4212 log.go:172] (0xc00050d4a0) Reply frame received for 5\nI0224 01:19:29.585603    4212 log.go:172] (0xc00050d4a0) Data frame received for 5\nI0224 01:19:29.585651    4212 log.go:172] (0xc000309860) (5) Data frame handling\nI0224 01:19:29.585683    4212 log.go:172] (0xc000309860) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0224 01:19:29.645567    4212 log.go:172] (0xc00050d4a0) Data frame received for 3\nI0224 01:19:29.646239    4212 log.go:172] (0xc0003097c0) (3) Data frame handling\nI0224 01:19:29.646339    4212 log.go:172] (0xc0003097c0) (3) Data frame sent\nI0224 01:19:29.724974    4212 log.go:172] (0xc00050d4a0) (0xc0003097c0) Stream removed, broadcasting: 3\nI0224 01:19:29.725112    4212 log.go:172] (0xc00050d4a0) Data frame received for 1\nI0224 01:19:29.725156    4212 log.go:172] (0xc00050d4a0) (0xc000309860) Stream removed, broadcasting: 5\nI0224 01:19:29.725257    4212 log.go:172] (0xc000651e00) (1) Data frame handling\nI0224 01:19:29.725299    4212 log.go:172] (0xc000651e00) (1) Data frame sent\nI0224 01:19:29.725318    4212 log.go:172] (0xc00050d4a0) (0xc000651e00) Stream removed, broadcasting: 1\nI0224 01:19:29.725344    4212 
log.go:172] (0xc00050d4a0) Go away received\nI0224 01:19:29.726411    4212 log.go:172] (0xc00050d4a0) (0xc000651e00) Stream removed, broadcasting: 1\nI0224 01:19:29.726435    4212 log.go:172] (0xc00050d4a0) (0xc0003097c0) Stream removed, broadcasting: 3\nI0224 01:19:29.726448    4212 log.go:172] (0xc00050d4a0) (0xc000309860) Stream removed, broadcasting: 5\n"
Feb 24 01:19:29.735: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 24 01:19:29.735: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 24 01:19:29.735: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 01:19:29.782: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 24 01:19:39.817: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 01:19:39.817: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 01:19:39.817: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 24 01:19:39.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999695s
Feb 24 01:19:40.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982323652s
Feb 24 01:19:41.900: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.967182089s
Feb 24 01:19:42.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.93130758s
Feb 24 01:19:43.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.922519244s
Feb 24 01:19:44.927: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9134932s
Feb 24 01:19:45.935: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.903866661s
Feb 24 01:19:46.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.895202522s
Feb 24 01:19:47.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.884160852s
Feb 24 01:19:48.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 869.360646ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6636
Feb 24 01:19:49.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 24 01:19:50.428: INFO: stderr: "I0224 01:19:50.158608    4234 log.go:172] (0xc000908a50) (0xc0005bc8c0) Create stream\nI0224 01:19:50.158703    4234 log.go:172] (0xc000908a50) (0xc0005bc8c0) Stream added, broadcasting: 1\nI0224 01:19:50.161747    4234 log.go:172] (0xc000908a50) Reply frame received for 1\nI0224 01:19:50.161815    4234 log.go:172] (0xc000908a50) (0xc000758780) Create stream\nI0224 01:19:50.161828    4234 log.go:172] (0xc000908a50) (0xc000758780) Stream added, broadcasting: 3\nI0224 01:19:50.163740    4234 log.go:172] (0xc000908a50) Reply frame received for 3\nI0224 01:19:50.163789    4234 log.go:172] (0xc000908a50) (0xc000758820) Create stream\nI0224 01:19:50.163813    4234 log.go:172] (0xc000908a50) (0xc000758820) Stream added, broadcasting: 5\nI0224 01:19:50.165166    4234 log.go:172] (0xc000908a50) Reply frame received for 5\nI0224 01:19:50.273456    4234 log.go:172] (0xc000908a50) Data frame received for 3\nI0224 01:19:50.273617    4234 log.go:172] (0xc000758780) (3) Data frame handling\nI0224 01:19:50.273646    4234 log.go:172] (0xc000758780) (3) Data frame sent\nI0224 01:19:50.273719    4234 log.go:172] (0xc000908a50) Data frame received for 5\nI0224 01:19:50.273741    4234 log.go:172] (0xc000758820) (5) Data frame handling\nI0224 01:19:50.273755    4234 log.go:172] (0xc000758820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0224 01:19:50.416400    4234 log.go:172] (0xc000908a50) Data frame received for 1\nI0224 01:19:50.416458    4234 log.go:172] (0xc0005bc8c0) (1) Data frame handling\nI0224 01:19:50.416477    4234 log.go:172] (0xc0005bc8c0) (1) Data frame sent\nI0224 01:19:50.416692    4234 log.go:172] (0xc000908a50) (0xc0005bc8c0) Stream removed, broadcasting: 1\nI0224 01:19:50.416864    4234 log.go:172] (0xc000908a50) (0xc000758780) Stream removed, broadcasting: 3\nI0224 01:19:50.416943    4234 log.go:172] (0xc000908a50) (0xc000758820) Stream removed, broadcasting: 5\nI0224 01:19:50.417012    4234 
log.go:172] (0xc000908a50) Go away received\nI0224 01:19:50.417587    4234 log.go:172] (0xc000908a50) (0xc0005bc8c0) Stream removed, broadcasting: 1\nI0224 01:19:50.417601    4234 log.go:172] (0xc000908a50) (0xc000758780) Stream removed, broadcasting: 3\nI0224 01:19:50.417607    4234 log.go:172] (0xc000908a50) (0xc000758820) Stream removed, broadcasting: 5\n"
Feb 24 01:19:50.428: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 24 01:19:50.428: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 24 01:19:50.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 24 01:19:50.804: INFO: stderr: "I0224 01:19:50.619477    4256 log.go:172] (0xc000a5c2c0) (0xc000aa6280) Create stream\nI0224 01:19:50.619592    4256 log.go:172] (0xc000a5c2c0) (0xc000aa6280) Stream added, broadcasting: 1\nI0224 01:19:50.621935    4256 log.go:172] (0xc000a5c2c0) Reply frame received for 1\nI0224 01:19:50.621989    4256 log.go:172] (0xc000a5c2c0) (0xc000aa6320) Create stream\nI0224 01:19:50.621997    4256 log.go:172] (0xc000a5c2c0) (0xc000aa6320) Stream added, broadcasting: 3\nI0224 01:19:50.623419    4256 log.go:172] (0xc000a5c2c0) Reply frame received for 3\nI0224 01:19:50.623438    4256 log.go:172] (0xc000a5c2c0) (0xc000aa63c0) Create stream\nI0224 01:19:50.623442    4256 log.go:172] (0xc000a5c2c0) (0xc000aa63c0) Stream added, broadcasting: 5\nI0224 01:19:50.624428    4256 log.go:172] (0xc000a5c2c0) Reply frame received for 5\nI0224 01:19:50.706743    4256 log.go:172] (0xc000a5c2c0) Data frame received for 5\nI0224 01:19:50.706856    4256 log.go:172] (0xc000aa63c0) (5) Data frame handling\nI0224 01:19:50.706877    4256 log.go:172] (0xc000aa63c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0224 01:19:50.707214    4256 log.go:172] (0xc000a5c2c0) Data frame received for 3\nI0224 01:19:50.707226    4256 log.go:172] (0xc000aa6320) (3) Data frame handling\nI0224 01:19:50.707234    4256 log.go:172] (0xc000aa6320) (3) Data frame sent\nI0224 01:19:50.786876    4256 log.go:172] (0xc000a5c2c0) Data frame received for 1\nI0224 01:19:50.787047    4256 log.go:172] (0xc000a5c2c0) (0xc000aa63c0) Stream removed, broadcasting: 5\nI0224 01:19:50.787133    4256 log.go:172] (0xc000aa6280) (1) Data frame handling\nI0224 01:19:50.787148    4256 log.go:172] (0xc000aa6280) (1) Data frame sent\nI0224 01:19:50.787220    4256 log.go:172] (0xc000a5c2c0) (0xc000aa6320) Stream removed, broadcasting: 3\nI0224 01:19:50.788084    4256 log.go:172] (0xc000a5c2c0) (0xc000aa6280) Stream removed, broadcasting: 1\nI0224 01:19:50.791089    4256 
log.go:172] (0xc000a5c2c0) (0xc000aa6280) Stream removed, broadcasting: 1\nI0224 01:19:50.791173    4256 log.go:172] (0xc000a5c2c0) (0xc000aa6320) Stream removed, broadcasting: 3\nI0224 01:19:50.791206    4256 log.go:172] (0xc000a5c2c0) (0xc000aa63c0) Stream removed, broadcasting: 5\n"
Feb 24 01:19:50.804: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 24 01:19:50.804: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 24 01:19:50.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6636 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 24 01:19:51.277: INFO: stderr: "I0224 01:19:51.095958    4277 log.go:172] (0xc0000f5a20) (0xc0009321e0) Create stream\nI0224 01:19:51.096025    4277 log.go:172] (0xc0000f5a20) (0xc0009321e0) Stream added, broadcasting: 1\nI0224 01:19:51.097940    4277 log.go:172] (0xc0000f5a20) Reply frame received for 1\nI0224 01:19:51.097976    4277 log.go:172] (0xc0000f5a20) (0xc0006f4280) Create stream\nI0224 01:19:51.097983    4277 log.go:172] (0xc0000f5a20) (0xc0006f4280) Stream added, broadcasting: 3\nI0224 01:19:51.099038    4277 log.go:172] (0xc0000f5a20) Reply frame received for 3\nI0224 01:19:51.099062    4277 log.go:172] (0xc0000f5a20) (0xc0006f6000) Create stream\nI0224 01:19:51.099083    4277 log.go:172] (0xc0000f5a20) (0xc0006f6000) Stream added, broadcasting: 5\nI0224 01:19:51.101174    4277 log.go:172] (0xc0000f5a20) Reply frame received for 5\nI0224 01:19:51.180703    4277 log.go:172] (0xc0000f5a20) Data frame received for 5\nI0224 01:19:51.180961    4277 log.go:172] (0xc0006f6000) (5) Data frame handling\nI0224 01:19:51.181022    4277 log.go:172] (0xc0006f6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0224 01:19:51.181100    4277 log.go:172] (0xc0000f5a20) Data frame received for 3\nI0224 01:19:51.181119    4277 log.go:172] (0xc0006f4280) (3) Data frame handling\nI0224 01:19:51.181158    4277 log.go:172] (0xc0006f4280) (3) Data frame sent\nI0224 01:19:51.269979    4277 log.go:172] (0xc0000f5a20) (0xc0006f6000) Stream removed, broadcasting: 5\nI0224 01:19:51.270048    4277 log.go:172] (0xc0000f5a20) Data frame received for 1\nI0224 01:19:51.270064    4277 log.go:172] (0xc0000f5a20) (0xc0006f4280) Stream removed, broadcasting: 3\nI0224 01:19:51.270110    4277 log.go:172] (0xc0009321e0) (1) Data frame handling\nI0224 01:19:51.270123    4277 log.go:172] (0xc0009321e0) (1) Data frame sent\nI0224 01:19:51.270131    4277 log.go:172] (0xc0000f5a20) (0xc0009321e0) Stream removed, broadcasting: 1\nI0224 01:19:51.270139    4277 
log.go:172] (0xc0000f5a20) Go away received\nI0224 01:19:51.270748    4277 log.go:172] (0xc0000f5a20) (0xc0009321e0) Stream removed, broadcasting: 1\nI0224 01:19:51.270760    4277 log.go:172] (0xc0000f5a20) (0xc0006f4280) Stream removed, broadcasting: 3\nI0224 01:19:51.270766    4277 log.go:172] (0xc0000f5a20) (0xc0006f6000) Stream removed, broadcasting: 5\n"
Feb 24 01:19:51.277: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 24 01:19:51.277: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 24 01:19:51.277: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 24 01:20:21.312: INFO: Deleting all statefulset in ns statefulset-6636
Feb 24 01:20:21.316: INFO: Scaling statefulset ss to 0
Feb 24 01:20:21.331: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 01:20:21.335: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:20:21.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6636" for this suite.

• [SLOW TEST:107.305 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":220,"skipped":3652,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
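The test above verifies the StatefulSet ordering guarantees: pods are created in ascending ordinal order (ss-0, ss-1, ss-2), deleted in descending order, and scaling halts while any stateful pod is unhealthy (which is why the test breaks readiness by moving index.html out of the Apache docroot). A minimal illustrative model of those guarantees, not the e2e framework's actual code:

```python
# Sketch of the StatefulSet ordering semantics this test exercises.
# Function names are illustrative, not the controller's real API.

def scale_up_order(name, replicas):
    """Pods are created one at a time, in ascending ordinal order."""
    return ["{}-{}".format(name, i) for i in range(replicas)]

def scale_down_order(name, replicas):
    """Pods are deleted one at a time, in descending ordinal order."""
    return ["{}-{}".format(name, i) for i in reversed(range(replicas))]

def may_scale(pod_healthy):
    """The controller holds further scaling while any pod is unhealthy."""
    return all(pod_healthy.values())

created = scale_up_order("ss", 3)    # ['ss-0', 'ss-1', 'ss-2']
deleted = scale_down_order("ss", 3)  # ['ss-2', 'ss-1', 'ss-0']
# All three pods are unhealthy after the mv, so scaling past 3 is blocked,
# matching the "doesn't scale past 3" countdown in the log.
blocked = not may_scale({"ss-0": False, "ss-1": False, "ss-2": False})
print(created, deleted, blocked)
```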
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:20:21.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 24 01:20:21.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7898'
Feb 24 01:20:21.686: INFO: stderr: ""
Feb 24 01:20:21.687: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Feb 24 01:20:21.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7898'
Feb 24 01:20:25.410: INFO: stderr: ""
Feb 24 01:20:25.410: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:20:25.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7898" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":280,"completed":221,"skipped":3656,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
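The kubectl test above relies on `--restart=Never` (with the `run-pod/v1` generator) producing a bare Pod rather than a managed workload, with the restart policy carried into the pod spec. A simplified, hypothetical sketch of that mapping — the helper name and dict shape are illustrative, mirroring the Pod API:

```python
# Illustrative sketch: what `kubectl run --restart=Never` conceptually builds.
# run_pod_manifest is a hypothetical helper, not part of kubectl.

def run_pod_manifest(name, image, restart="Never"):
    if restart not in ("Always", "OnFailure", "Never"):
        raise ValueError("unsupported restart policy: " + restart)
    return {
        "apiVersion": "v1",
        "kind": "Pod",  # --restart=Never yields a bare Pod, not a Deployment
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": restart,
            "containers": [{"name": name, "image": image}],
        },
    }

manifest = run_pod_manifest("e2e-test-httpd-pod",
                            "docker.io/library/httpd:2.4.38-alpine")
print(manifest["kind"], manifest["spec"]["restartPolicy"])
```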
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:20:25.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3759
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3759
STEP: Creating statefulset with conflicting port in namespace statefulset-3759
STEP: Waiting until pod test-pod starts running in namespace statefulset-3759
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3759
Feb 24 01:20:35.690: INFO: Observed stateful pod in namespace: statefulset-3759, name: ss-0, uid: 7c44430e-4aa5-4127-a525-9aa13224f718, status phase: Pending. Waiting for statefulset controller to delete.
Feb 24 01:20:42.311: INFO: Observed stateful pod in namespace: statefulset-3759, name: ss-0, uid: 7c44430e-4aa5-4127-a525-9aa13224f718, status phase: Failed. Waiting for statefulset controller to delete.
Feb 24 01:20:42.320: INFO: Observed stateful pod in namespace: statefulset-3759, name: ss-0, uid: 7c44430e-4aa5-4127-a525-9aa13224f718, status phase: Failed. Waiting for statefulset controller to delete.
Feb 24 01:20:42.346: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3759
STEP: Removing pod with conflicting port in namespace statefulset-3759
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3759 and enters the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 24 01:20:58.539: INFO: Deleting all statefulset in ns statefulset-3759
Feb 24 01:20:58.545: INFO: Scaling statefulset ss to 0
Feb 24 01:21:19.075: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 01:21:19.079: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:21:19.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3759" for this suite.

• [SLOW TEST:53.713 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":222,"skipped":3658,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
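The eviction test above watches pod ss-0 move through the phases the log reports (Pending, Failed on the port conflict, then deleted) and passes once the StatefulSet controller recreates the pod. A sketch of that observation loop over a hypothetical event stream:

```python
# Sketch of the check this test performs: the event sequence below is
# hypothetical, condensed from the phases reported in the log.

events = [
    ("ss-0", "Pending"),  # scheduled alongside the conflicting-port pod
    ("ss-0", "Failed"),   # port conflict: the stateful pod fails
    ("ss-0", "DELETED"),  # the StatefulSet controller deletes the failed pod
    ("ss-0", "Running"),  # recreated and running once the port is freed
]

def observed_recreation(events):
    """True once a delete event is followed by the pod running again."""
    deleted = False
    for _, phase in events:
        if phase == "DELETED":
            deleted = True
        elif deleted and phase == "Running":
            return True
    return False

print(observed_recreation(events))  # True
```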
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:21:19.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 24 01:21:26.656: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:21:26.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3399" for this suite.

• [SLOW TEST:7.608 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":223,"skipped":3691,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
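The termination-message test above checks the `FallbackToLogsOnError` policy: the kubelet reads the termination message file, and only when the container failed and the file is empty does it fall back to the tail of the container log. Since this pod succeeded and wrote "OK" to the file, the file wins. A minimal sketch of that decision, with an illustrative function name:

```python
# Sketch of TerminationMessagePolicy handling; not the kubelet's real code.

def termination_message(policy, exit_code, message_file, log_tail):
    """Return the message the container status should carry."""
    if policy == "FallbackToLogsOnError" and exit_code != 0 and not message_file:
        return log_tail  # fall back to logs only on error with an empty file
    return message_file

# Pod succeeded and wrote "OK" to the file, as in this test: the file wins.
print(termination_message("FallbackToLogsOnError", 0, "OK", "some logs"))
# A failed container with an empty file would surface the log tail instead.
print(termination_message("FallbackToLogsOnError", 1, "", "panic: boom"))
```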
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:21:26.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:21:26.894: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 24 01:21:26.920: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 24 01:21:32.016: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 24 01:21:34.042: INFO: Creating deployment "test-rolling-update-deployment"
Feb 24 01:21:34.047: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 24 01:21:34.061: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 24 01:21:36.070: INFO: Ensuring status for deployment "test-rolling-update-deployment" matches the expected state
Feb 24 01:21:36.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:21:38.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:21:40.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104100, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:21:42.080: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 24 01:21:42.092: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-9636 /apis/apps/v1/namespaces/deployment-9636/deployments/test-rolling-update-deployment 8d684c9f-a4b9-4c62-9d11-97e98f117a03 10341746 1 2020-02-24 01:21:34 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a6e758  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-24 01:21:34 +0000 UTC,LastTransitionTime:2020-02-24 01:21:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-24 01:21:40 +0000 UTC,LastTransitionTime:2020-02-24 01:21:34 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 24 01:21:42.096: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-9636 /apis/apps/v1/namespaces/deployment-9636/replicasets/test-rolling-update-deployment-67cf4f6444 95bff874-2427-49cb-bc5f-85d21ee5985b 10341735 1 2020-02-24 01:21:34 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 8d684c9f-a4b9-4c62-9d11-97e98f117a03 0xc00113c2d7 0xc00113c2d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00113c348  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 24 01:21:42.096: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 24 01:21:42.097: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-9636 /apis/apps/v1/namespaces/deployment-9636/replicasets/test-rolling-update-controller ecb1f65c-f97f-4fa7-8e72-2847ae37674b 10341744 2 2020-02-24 01:21:26 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 8d684c9f-a4b9-4c62-9d11-97e98f117a03 0xc00113c1e7 0xc00113c1e8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00113c258  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 24 01:21:42.100: INFO: Pod "test-rolling-update-deployment-67cf4f6444-jnqfg" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-jnqfg test-rolling-update-deployment-67cf4f6444- deployment-9636 /api/v1/namespaces/deployment-9636/pods/test-rolling-update-deployment-67cf4f6444-jnqfg 917da184-911d-4f48-aec2-f089d06dfb16 10341734 0 2020-02-24 01:21:34 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 95bff874-2427-49cb-bc5f-85d21ee5985b 0xc001a6ee27 0xc001a6ee28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j7vfj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j7vfj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j7vfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:21:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:21:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:21:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:21:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-24 01:21:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-24 01:21:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://ec283d129fbff212d576de13509315724e9c1d5a6599e8da7cdf2fb4e63b9d7e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:21:42.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9636" for this suite.

• [SLOW TEST:15.338 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":224,"skipped":3704,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:21:42.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-4cce6fe1-5127-4f10-a6d0-65a21a73c0b9
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:21:42.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7559" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":225,"skipped":3721,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:21:42.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 24 01:21:42.474: INFO: >>> kubeConfig: /root/.kube/config
Feb 24 01:21:46.024: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:21:57.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5455" for this suite.

• [SLOW TEST:14.761 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":226,"skipped":3732,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:21:57.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9598
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-9598
Feb 24 01:21:57.252: INFO: Found 0 stateful pods, waiting for 1
Feb 24 01:22:07.260: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 24 01:22:07.304: INFO: Deleting all statefulset in ns statefulset-9598
Feb 24 01:22:07.317: INFO: Scaling statefulset ss to 0
Feb 24 01:22:27.446: INFO: Waiting for statefulset status.replicas updated to 0
Feb 24 01:22:27.453: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:22:27.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9598" for this suite.

• [SLOW TEST:30.497 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":227,"skipped":3747,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:22:27.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:22:27.635: INFO: Creating deployment "test-recreate-deployment"
Feb 24 01:22:27.652: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 24 01:22:27.672: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 24 01:22:29.690: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 24 01:22:29.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:22:31.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:22:33.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104147, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:22:35.700: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 24 01:22:35.715: INFO: Updating deployment test-recreate-deployment
Feb 24 01:22:35.715: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 24 01:22:36.158: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-280 /apis/apps/v1/namespaces/deployment-280/deployments/test-recreate-deployment 50191428-1d37-4e08-8f1e-eb684f11b612 10342052 2 2020-02-24 01:22:27 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001915af8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-24 01:22:35 +0000 UTC,LastTransitionTime:2020-02-24 01:22:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-24 01:22:36 +0000 UTC,LastTransitionTime:2020-02-24 01:22:27 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 24 01:22:36.201: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-280 /apis/apps/v1/namespaces/deployment-280/replicasets/test-recreate-deployment-5f94c574ff 9ff3891f-27af-48fb-93c3-93c0dcea3511 10342051 1 2020-02-24 01:22:35 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 50191428-1d37-4e08-8f1e-eb684f11b612 0xc00418e527 0xc00418e528}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00418e5a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 24 01:22:36.201: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 24 01:22:36.201: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-280 /apis/apps/v1/namespaces/deployment-280/replicasets/test-recreate-deployment-799c574856 3430f748-3a52-4251-bdef-6ac743f383a1 10342041 2 2020-02-24 01:22:27 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 50191428-1d37-4e08-8f1e-eb684f11b612 0xc00418e647 0xc00418e648}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00418e6d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 24 01:22:36.209: INFO: Pod "test-recreate-deployment-5f94c574ff-85k5h" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-85k5h test-recreate-deployment-5f94c574ff- deployment-280 /api/v1/namespaces/deployment-280/pods/test-recreate-deployment-5f94c574ff-85k5h e4fdcc4d-68bd-41b3-8e52-34108afd937b 10342053 0 2020-02-24 01:22:35 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 9ff3891f-27af-48fb-93c3-93c0dcea3511 0xc000838bb7 0xc000838bb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ftzh8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ftzh8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ftzh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:22:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:22:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:22:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-24 01:22:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-24 01:22:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:22:36.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-280" for this suite.

• [SLOW TEST:8.696 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":228,"skipped":3752,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:22:36.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:22:37.153: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:22:39.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:22:41.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:22:43.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:22:45.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:22:47.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104157, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:22:50.191: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:22:50.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9855" for this suite.
STEP: Destroying namespace "webhook-9855-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:15.031 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":229,"skipped":3785,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:22:51.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0224 01:23:06.667925      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 24 01:23:06.668: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:23:06.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9580" for this suite.

• [SLOW TEST:17.992 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":230,"skipped":3796,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:23:09.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:23:32.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-992" for this suite.

• [SLOW TEST:22.944 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":231,"skipped":3799,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:23:32.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-85032bfe-00d7-419d-a32d-966f5cc03305
STEP: Creating a pod to test consume configMaps
Feb 24 01:23:32.407: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143" in namespace "configmap-5025" to be "success or failure"
Feb 24 01:23:32.421: INFO: Pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143": Phase="Pending", Reason="", readiness=false. Elapsed: 13.602094ms
Feb 24 01:23:34.434: INFO: Pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02604187s
Feb 24 01:23:36.471: INFO: Pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063267385s
Feb 24 01:23:38.487: INFO: Pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079682226s
Feb 24 01:23:40.513: INFO: Pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105556144s
Feb 24 01:23:42.545: INFO: Pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137702077s
STEP: Saw pod success
Feb 24 01:23:42.546: INFO: Pod "pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143" satisfied condition "success or failure"
Feb 24 01:23:42.551: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143 container configmap-volume-test: 
STEP: delete the pod
Feb 24 01:23:42.682: INFO: Waiting for pod pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143 to disappear
Feb 24 01:23:42.702: INFO: Pod pod-configmaps-5e01f698-3d0a-4552-ba01-a42bd9477143 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:23:42.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5025" for this suite.

• [SLOW TEST:10.564 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":232,"skipped":3803,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:23:42.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:23:43.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1253" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":233,"skipped":3834,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:23:43.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 24 01:23:59.436: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 24 01:23:59.452: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 24 01:24:01.453: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 24 01:24:01.464: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 24 01:24:03.453: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 24 01:24:03.506: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 24 01:24:05.452: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 24 01:24:05.457: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:24:05.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-890" for this suite.

• [SLOW TEST:22.332 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3892,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:24:05.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:24:13.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3888" for this suite.

• [SLOW TEST:8.378 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":235,"skipped":3928,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:24:13.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-348f198f-284e-4bdf-972f-481be39391ae
STEP: Creating a pod to test consume configMaps
Feb 24 01:24:13.976: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a" in namespace "projected-6794" to be "success or failure"
Feb 24 01:24:14.034: INFO: Pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.906644ms
Feb 24 01:24:16.039: INFO: Pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062167905s
Feb 24 01:24:18.052: INFO: Pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07557774s
Feb 24 01:24:20.058: INFO: Pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080964111s
Feb 24 01:24:22.065: INFO: Pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087888971s
Feb 24 01:24:24.071: INFO: Pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093947775s
STEP: Saw pod success
Feb 24 01:24:24.071: INFO: Pod "pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a" satisfied condition "success or failure"
Feb 24 01:24:24.075: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a container projected-configmap-volume-test: 
STEP: delete the pod
Feb 24 01:24:24.201: INFO: Waiting for pod pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a to disappear
Feb 24 01:24:24.229: INFO: Pod pod-projected-configmaps-d82b409c-c80d-4121-801a-d8c2c6fe993a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:24:24.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6794" for this suite.

• [SLOW TEST:10.393 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3932,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:24:24.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 01:24:24.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4" in namespace "projected-3982" to be "success or failure"
Feb 24 01:24:24.439: INFO: Pod "downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4": Phase="Pending", Reason="", readiness=false. Elapsed: 56.366271ms
Feb 24 01:24:26.449: INFO: Pod "downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065854344s
Feb 24 01:24:28.457: INFO: Pod "downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074674025s
Feb 24 01:24:30.490: INFO: Pod "downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107492388s
Feb 24 01:24:32.497: INFO: Pod "downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114563478s
STEP: Saw pod success
Feb 24 01:24:32.498: INFO: Pod "downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4" satisfied condition "success or failure"
Feb 24 01:24:32.502: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4 container client-container: 
STEP: delete the pod
Feb 24 01:24:32.601: INFO: Waiting for pod downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4 to disappear
Feb 24 01:24:32.614: INFO: Pod downwardapi-volume-bee7ba21-b238-438f-9d84-c8c1b711f5c4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:24:32.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3982" for this suite.

• [SLOW TEST:8.401 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":237,"skipped":3937,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:24:32.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:24:42.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6339" for this suite.

• [SLOW TEST:10.245 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":238,"skipped":3938,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:24:42.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Feb 24 01:24:43.127: INFO: Waiting up to 5m0s for pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459" in namespace "containers-6866" to be "success or failure"
Feb 24 01:24:43.265: INFO: Pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459": Phase="Pending", Reason="", readiness=false. Elapsed: 137.321274ms
Feb 24 01:24:45.275: INFO: Pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147452188s
Feb 24 01:24:47.281: INFO: Pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153675231s
Feb 24 01:24:49.291: INFO: Pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163704759s
Feb 24 01:24:51.298: INFO: Pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170353122s
Feb 24 01:24:53.306: INFO: Pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.177884536s
STEP: Saw pod success
Feb 24 01:24:53.306: INFO: Pod "client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459" satisfied condition "success or failure"
Feb 24 01:24:53.310: INFO: Trying to get logs from node jerma-node pod client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459 container test-container: 
STEP: delete the pod
Feb 24 01:24:54.113: INFO: Waiting for pod client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459 to disappear
Feb 24 01:24:54.123: INFO: Pod client-containers-cac0fd13-5cb7-4b86-ab80-71eb367d7459 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:24:54.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6866" for this suite.

• [SLOW TEST:11.261 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":239,"skipped":3943,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:24:54.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:24:55.136: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:24:57.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:24:59.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:25:01.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104295, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:25:04.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 24 01:25:04.242: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:25:04.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2621" for this suite.
STEP: Destroying namespace "webhook-2621-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.363 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":240,"skipped":3943,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:25:04.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:25:04.628: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:25:10.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9336" for this suite.

• [SLOW TEST:6.356 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":280,"completed":241,"skipped":3943,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:25:10.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 24 01:25:19.643: INFO: Successfully updated pod "pod-update-7e014abc-9720-4773-a538-979472cb69d5"
STEP: verifying the updated pod is in kubernetes
Feb 24 01:25:19.655: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:25:19.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2491" for this suite.

• [SLOW TEST:8.813 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":242,"skipped":3946,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:25:19.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8861.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8861.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8861.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8861.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 01:25:37.914: INFO: DNS probes using dns-test-be742d03-fd67-4c65-abbb-c2fb3480bd16 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8861.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8861.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8861.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8861.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 01:25:52.085: INFO: File wheezy_udp@dns-test-service-3.dns-8861.svc.cluster.local from pod  dns-8861/dns-test-3396a215-9985-4ce4-af8b-89b120da4a1d contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 24 01:25:52.089: INFO: File jessie_udp@dns-test-service-3.dns-8861.svc.cluster.local from pod  dns-8861/dns-test-3396a215-9985-4ce4-af8b-89b120da4a1d contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 24 01:25:52.089: INFO: Lookups using dns-8861/dns-test-3396a215-9985-4ce4-af8b-89b120da4a1d failed for: [wheezy_udp@dns-test-service-3.dns-8861.svc.cluster.local jessie_udp@dns-test-service-3.dns-8861.svc.cluster.local]

Feb 24 01:25:57.099: INFO: File wheezy_udp@dns-test-service-3.dns-8861.svc.cluster.local from pod  dns-8861/dns-test-3396a215-9985-4ce4-af8b-89b120da4a1d contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 24 01:25:57.105: INFO: File jessie_udp@dns-test-service-3.dns-8861.svc.cluster.local from pod  dns-8861/dns-test-3396a215-9985-4ce4-af8b-89b120da4a1d contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 24 01:25:57.105: INFO: Lookups using dns-8861/dns-test-3396a215-9985-4ce4-af8b-89b120da4a1d failed for: [wheezy_udp@dns-test-service-3.dns-8861.svc.cluster.local jessie_udp@dns-test-service-3.dns-8861.svc.cluster.local]

Feb 24 01:26:02.105: INFO: DNS probes using dns-test-3396a215-9985-4ce4-af8b-89b120da4a1d succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8861.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8861.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8861.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8861.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 01:26:20.357: INFO: DNS probes using dns-test-367da1e4-1527-47ae-85ff-1d65c827323c succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:26:20.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8861" for this suite.

• [SLOW TEST:60.946 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":243,"skipped":3957,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:26:20.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-5146
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 24 01:26:20.772: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 24 01:26:20.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:26:23.411: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:26:24.923: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:26:28.036: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:26:29.466: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:26:30.955: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:26:32.979: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:26:34.925: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:26:36.924: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:26:38.921: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:26:40.923: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:26:42.924: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:26:44.925: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:26:46.925: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 24 01:26:46.938: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 24 01:26:48.950: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 24 01:26:57.043: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5146 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 01:26:57.043: INFO: >>> kubeConfig: /root/.kube/config
I0224 01:26:57.132348      10 log.go:172] (0xc0029000b0) (0xc001bdef00) Create stream
I0224 01:26:57.132803      10 log.go:172] (0xc0029000b0) (0xc001bdef00) Stream added, broadcasting: 1
I0224 01:26:57.141354      10 log.go:172] (0xc0029000b0) Reply frame received for 1
I0224 01:26:57.141605      10 log.go:172] (0xc0029000b0) (0xc0017d8280) Create stream
I0224 01:26:57.141642      10 log.go:172] (0xc0029000b0) (0xc0017d8280) Stream added, broadcasting: 3
I0224 01:26:57.144739      10 log.go:172] (0xc0029000b0) Reply frame received for 3
I0224 01:26:57.144795      10 log.go:172] (0xc0029000b0) (0xc0011570e0) Create stream
I0224 01:26:57.144816      10 log.go:172] (0xc0029000b0) (0xc0011570e0) Stream added, broadcasting: 5
I0224 01:26:57.148087      10 log.go:172] (0xc0029000b0) Reply frame received for 5
I0224 01:26:58.273542      10 log.go:172] (0xc0029000b0) Data frame received for 3
I0224 01:26:58.273827      10 log.go:172] (0xc0017d8280) (3) Data frame handling
I0224 01:26:58.273885      10 log.go:172] (0xc0017d8280) (3) Data frame sent
I0224 01:26:58.395849      10 log.go:172] (0xc0029000b0) (0xc0011570e0) Stream removed, broadcasting: 5
I0224 01:26:58.396055      10 log.go:172] (0xc0029000b0) Data frame received for 1
I0224 01:26:58.396067      10 log.go:172] (0xc001bdef00) (1) Data frame handling
I0224 01:26:58.396095      10 log.go:172] (0xc001bdef00) (1) Data frame sent
I0224 01:26:58.396332      10 log.go:172] (0xc0029000b0) (0xc001bdef00) Stream removed, broadcasting: 1
I0224 01:26:58.396597      10 log.go:172] (0xc0029000b0) (0xc0017d8280) Stream removed, broadcasting: 3
I0224 01:26:58.396636      10 log.go:172] (0xc0029000b0) Go away received
I0224 01:26:58.396736      10 log.go:172] (0xc0029000b0) (0xc001bdef00) Stream removed, broadcasting: 1
I0224 01:26:58.396775      10 log.go:172] (0xc0029000b0) (0xc0017d8280) Stream removed, broadcasting: 3
I0224 01:26:58.396790      10 log.go:172] (0xc0029000b0) (0xc0011570e0) Stream removed, broadcasting: 5
Feb 24 01:26:58.396: INFO: Found all expected endpoints: [netserver-0]
Feb 24 01:26:58.404: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5146 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 01:26:58.404: INFO: >>> kubeConfig: /root/.kube/config
I0224 01:26:58.462934      10 log.go:172] (0xc002a94840) (0xc0017d8d20) Create stream
I0224 01:26:58.463255      10 log.go:172] (0xc002a94840) (0xc0017d8d20) Stream added, broadcasting: 1
I0224 01:26:58.469141      10 log.go:172] (0xc002a94840) Reply frame received for 1
I0224 01:26:58.469218      10 log.go:172] (0xc002a94840) (0xc0025000a0) Create stream
I0224 01:26:58.469243      10 log.go:172] (0xc002a94840) (0xc0025000a0) Stream added, broadcasting: 3
I0224 01:26:58.471722      10 log.go:172] (0xc002a94840) Reply frame received for 3
I0224 01:26:58.471853      10 log.go:172] (0xc002a94840) (0xc001508460) Create stream
I0224 01:26:58.471867      10 log.go:172] (0xc002a94840) (0xc001508460) Stream added, broadcasting: 5
I0224 01:26:58.474606      10 log.go:172] (0xc002a94840) Reply frame received for 5
I0224 01:26:59.601621      10 log.go:172] (0xc002a94840) Data frame received for 3
I0224 01:26:59.601707      10 log.go:172] (0xc0025000a0) (3) Data frame handling
I0224 01:26:59.601742      10 log.go:172] (0xc0025000a0) (3) Data frame sent
I0224 01:26:59.714488      10 log.go:172] (0xc002a94840) (0xc001508460) Stream removed, broadcasting: 5
I0224 01:26:59.714871      10 log.go:172] (0xc002a94840) Data frame received for 1
I0224 01:26:59.714895      10 log.go:172] (0xc0017d8d20) (1) Data frame handling
I0224 01:26:59.714939      10 log.go:172] (0xc0017d8d20) (1) Data frame sent
I0224 01:26:59.715023      10 log.go:172] (0xc002a94840) (0xc0017d8d20) Stream removed, broadcasting: 1
I0224 01:26:59.715392      10 log.go:172] (0xc002a94840) (0xc0025000a0) Stream removed, broadcasting: 3
I0224 01:26:59.715420      10 log.go:172] (0xc002a94840) Go away received
I0224 01:26:59.716510      10 log.go:172] (0xc002a94840) (0xc0017d8d20) Stream removed, broadcasting: 1
I0224 01:26:59.716536      10 log.go:172] (0xc002a94840) (0xc0025000a0) Stream removed, broadcasting: 3
I0224 01:26:59.716546      10 log.go:172] (0xc002a94840) (0xc001508460) Stream removed, broadcasting: 5
Feb 24 01:26:59.716: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:26:59.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5146" for this suite.

• [SLOW TEST:39.109 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":244,"skipped":3966,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:26:59.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 24 01:26:59.804: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:27:18.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4984" for this suite.

• [SLOW TEST:19.133 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":245,"skipped":3973,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:27:18.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:27:19.101: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"34f49e29-bb2b-4a21-a207-df1514cfb3b0", Controller:(*bool)(0xc004a5488a), BlockOwnerDeletion:(*bool)(0xc004a5488b)}}
Feb 24 01:27:19.131: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3205726c-f772-4a43-8dfa-89e1eff8bd7a", Controller:(*bool)(0xc004a54a1a), BlockOwnerDeletion:(*bool)(0xc004a54a1b)}}
Feb 24 01:27:19.166: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9c42e24e-0717-4b8c-9bf2-75d2a8786127", Controller:(*bool)(0xc00402e872), BlockOwnerDeletion:(*bool)(0xc00402e873)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:27:24.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7294" for this suite.

• [SLOW TEST:5.374 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":246,"skipped":3994,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:27:24.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:28:03.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-549" for this suite.
STEP: Destroying namespace "nsdeletetest-5759" for this suite.
Feb 24 01:28:03.815: INFO: Namespace nsdeletetest-5759 was already deleted
STEP: Destroying namespace "nsdeletetest-3818" for this suite.

• [SLOW TEST:39.574 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":247,"skipped":4006,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:28:03.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:28:17.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3536" for this suite.

• [SLOW TEST:13.285 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":248,"skipped":4039,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:28:17.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:29:08.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5729" for this suite.

• [SLOW TEST:51.285 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":249,"skipped":4055,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:29:08.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:29:08.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8136'
Feb 24 01:29:09.063: INFO: stderr: ""
Feb 24 01:29:09.064: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 24 01:29:09.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8136'
Feb 24 01:29:09.434: INFO: stderr: ""
Feb 24 01:29:09.434: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 24 01:29:10.445: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:10.445: INFO: Found 0 / 1
Feb 24 01:29:11.445: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:11.445: INFO: Found 0 / 1
Feb 24 01:29:12.450: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:12.450: INFO: Found 0 / 1
Feb 24 01:29:13.443: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:13.443: INFO: Found 0 / 1
Feb 24 01:29:14.660: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:14.660: INFO: Found 0 / 1
Feb 24 01:29:15.648: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:15.649: INFO: Found 0 / 1
Feb 24 01:29:16.445: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:16.445: INFO: Found 0 / 1
Feb 24 01:29:17.446: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:17.446: INFO: Found 1 / 1
Feb 24 01:29:17.446: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 24 01:29:17.452: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 24 01:29:17.452: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 24 01:29:17.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-5prpb --namespace=kubectl-8136'
Feb 24 01:29:17.606: INFO: stderr: ""
Feb 24 01:29:17.607: INFO: stdout: "Name:         agnhost-master-5prpb\nNamespace:    kubectl-8136\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Mon, 24 Feb 2020 01:29:09 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://9424f844daa50a8d952e3e03f2243d799407b1929fd8ba00312605a70ded1899\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 24 Feb 2020 01:29:15 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dc2l7 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-dc2l7:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-dc2l7\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-8136/agnhost-master-5prpb to jerma-node\n  Normal  Pulled     5s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    3s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 24 01:29:17.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8136'
Feb 24 01:29:17.789: INFO: stderr: ""
Feb 24 01:29:17.790: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-8136\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-5prpb\n"
Feb 24 01:29:17.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8136'
Feb 24 01:29:17.959: INFO: stderr: ""
Feb 24 01:29:17.959: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-8136\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.44.10\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb 24 01:29:17.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 24 01:29:18.096: INFO: stderr: ""
Feb 24 01:29:18.097: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 24 Feb 2020 01:29:08 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 24 Feb 2020 01:26:58 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 24 Feb 2020 01:26:58 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 24 Feb 2020 01:26:58 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 24 Feb 2020 01:26:58 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         50d\n  kubectl-8136                agnhost-master-5prpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb 24 01:29:18.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8136'
Feb 24 01:29:18.244: INFO: stderr: ""
Feb 24 01:29:18.245: INFO: stdout: "Name:         kubectl-8136\nLabels:       e2e-framework=kubectl\n              e2e-run=d929b4ed-2b14-4dc9-b87a-0057b8abf1a4\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:29:18.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8136" for this suite.

• [SLOW TEST:9.857 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":280,"completed":250,"skipped":4085,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:29:18.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:29:19.169: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:29:21.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:29:23.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:29:25.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:29:27.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104559, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:29:30.259: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:29:30.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:29:31.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-102" for this suite.
STEP: Destroying namespace "webhook-102-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.518 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":251,"skipped":4090,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:29:31.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
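The `awk` pipeline embedded in the probe commands above derives the pod's DNS A-record name by replacing the dots of its IPv4 address with dashes and appending the per-namespace pod suffix. A minimal Python sketch of the same transformation (the function name `pod_a_record` is illustrative; the namespace `dns-8277` is the one from this run):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mimic `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'`:
    dash-separate the IPv4 octets and append the namespace's pod DNS suffix."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

# Example with an address that appears later in this log:
print(pod_a_record("10.44.0.2", "dns-8277"))
# 10-44-0-2.dns-8277.pod.cluster.local
```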

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 01:29:45.999: INFO: DNS probes using dns-8277/dns-test-f47d4fc9-2743-46c1-80eb-f914524e4fc5 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:29:46.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8277" for this suite.

• [SLOW TEST:14.708 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":252,"skipped":4117,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:29:46.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:30:02.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6165" for this suite.

• [SLOW TEST:16.527 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":253,"skipped":4135,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
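The ResourceQuota steps above pair a Terminating-scoped quota with a NotTerminating-scoped one: a pod with `activeDeadlineSeconds` set counts only against the former, a long-running pod only against the latter. A minimal manifest sketch of a terminating-scoped quota (name and limits are illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating       # illustrative name
spec:
  scopes: ["Terminating"]       # only matches pods with activeDeadlineSeconds set
  hard:
    pods: "2"
    requests.cpu: 500m
    requests.memory: 512Mi
```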
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:30:03.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-7640
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 24 01:30:03.151: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 24 01:30:03.332: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:30:05.618: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:30:07.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:30:09.776: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:30:11.783: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:30:13.432: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:30:15.353: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:30:17.339: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:30:19.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:30:21.340: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:30:23.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:30:25.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:30:27.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 24 01:30:29.341: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 24 01:30:29.350: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 24 01:30:37.422: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-7640 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 01:30:37.422: INFO: >>> kubeConfig: /root/.kube/config
I0224 01:30:37.496507      10 log.go:172] (0xc002667600) (0xc001157b80) Create stream
I0224 01:30:37.496601      10 log.go:172] (0xc002667600) (0xc001157b80) Stream added, broadcasting: 1
I0224 01:30:37.500283      10 log.go:172] (0xc002667600) Reply frame received for 1
I0224 01:30:37.500333      10 log.go:172] (0xc002667600) (0xc001509680) Create stream
I0224 01:30:37.500351      10 log.go:172] (0xc002667600) (0xc001509680) Stream added, broadcasting: 3
I0224 01:30:37.502361      10 log.go:172] (0xc002667600) Reply frame received for 3
I0224 01:30:37.502412      10 log.go:172] (0xc002667600) (0xc0010d2b40) Create stream
I0224 01:30:37.502433      10 log.go:172] (0xc002667600) (0xc0010d2b40) Stream added, broadcasting: 5
I0224 01:30:37.504605      10 log.go:172] (0xc002667600) Reply frame received for 5
I0224 01:30:37.612538      10 log.go:172] (0xc002667600) Data frame received for 3
I0224 01:30:37.612603      10 log.go:172] (0xc001509680) (3) Data frame handling
I0224 01:30:37.612623      10 log.go:172] (0xc001509680) (3) Data frame sent
I0224 01:30:37.675860      10 log.go:172] (0xc002667600) (0xc001509680) Stream removed, broadcasting: 3
I0224 01:30:37.676147      10 log.go:172] (0xc002667600) Data frame received for 1
I0224 01:30:37.676174      10 log.go:172] (0xc001157b80) (1) Data frame handling
I0224 01:30:37.676192      10 log.go:172] (0xc001157b80) (1) Data frame sent
I0224 01:30:37.676235      10 log.go:172] (0xc002667600) (0xc0010d2b40) Stream removed, broadcasting: 5
I0224 01:30:37.676275      10 log.go:172] (0xc002667600) (0xc001157b80) Stream removed, broadcasting: 1
I0224 01:30:37.676346      10 log.go:172] (0xc002667600) Go away received
I0224 01:30:37.676412      10 log.go:172] (0xc002667600) (0xc001157b80) Stream removed, broadcasting: 1
I0224 01:30:37.676445      10 log.go:172] (0xc002667600) (0xc001509680) Stream removed, broadcasting: 3
I0224 01:30:37.676464      10 log.go:172] (0xc002667600) (0xc0010d2b40) Stream removed, broadcasting: 5
Feb 24 01:30:37.676: INFO: Waiting for responses: map[]
Feb 24 01:30:37.681: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-7640 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 24 01:30:37.681: INFO: >>> kubeConfig: /root/.kube/config
I0224 01:30:37.723234      10 log.go:172] (0xc0037582c0) (0xc001570460) Create stream
I0224 01:30:37.723386      10 log.go:172] (0xc0037582c0) (0xc001570460) Stream added, broadcasting: 1
I0224 01:30:37.727146      10 log.go:172] (0xc0037582c0) Reply frame received for 1
I0224 01:30:37.727173      10 log.go:172] (0xc0037582c0) (0xc0010d32c0) Create stream
I0224 01:30:37.727182      10 log.go:172] (0xc0037582c0) (0xc0010d32c0) Stream added, broadcasting: 3
I0224 01:30:37.728763      10 log.go:172] (0xc0037582c0) Reply frame received for 3
I0224 01:30:37.728788      10 log.go:172] (0xc0037582c0) (0xc000ba60a0) Create stream
I0224 01:30:37.728799      10 log.go:172] (0xc0037582c0) (0xc000ba60a0) Stream added, broadcasting: 5
I0224 01:30:37.730243      10 log.go:172] (0xc0037582c0) Reply frame received for 5
I0224 01:30:37.825060      10 log.go:172] (0xc0037582c0) Data frame received for 3
I0224 01:30:37.825149      10 log.go:172] (0xc0010d32c0) (3) Data frame handling
I0224 01:30:37.825171      10 log.go:172] (0xc0010d32c0) (3) Data frame sent
I0224 01:30:37.899383      10 log.go:172] (0xc0037582c0) Data frame received for 1
I0224 01:30:37.899509      10 log.go:172] (0xc001570460) (1) Data frame handling
I0224 01:30:37.899533      10 log.go:172] (0xc001570460) (1) Data frame sent
I0224 01:30:37.899549      10 log.go:172] (0xc0037582c0) (0xc001570460) Stream removed, broadcasting: 1
I0224 01:30:37.899828      10 log.go:172] (0xc0037582c0) (0xc0010d32c0) Stream removed, broadcasting: 3
I0224 01:30:37.900966      10 log.go:172] (0xc0037582c0) (0xc000ba60a0) Stream removed, broadcasting: 5
I0224 01:30:37.901100      10 log.go:172] (0xc0037582c0) (0xc001570460) Stream removed, broadcasting: 1
I0224 01:30:37.901118      10 log.go:172] (0xc0037582c0) (0xc0010d32c0) Stream removed, broadcasting: 3
I0224 01:30:37.901136      10 log.go:172] (0xc0037582c0) (0xc000ba60a0) Stream removed, broadcasting: 5
Feb 24 01:30:37.901: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:30:37.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0224 01:30:37.902747      10 log.go:172] (0xc0037582c0) Go away received
STEP: Destroying namespace "pod-network-test-7640" for this suite.

• [SLOW TEST:34.908 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":254,"skipped":4145,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
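The intra-pod check above curls the agnhost webserver's `/dial` endpoint on the test container pod, which in turn asks each netserver pod for its hostname over HTTP. A small Python sketch (the function name `dial_url` is illustrative) that reconstructs the probe URL seen in the ExecWithOptions lines:

```python
from urllib.parse import urlencode

def dial_url(proxy_ip: str, target_ip: str, port: int = 8080, tries: int = 1) -> str:
    """Build the /dial probe URL: the webserver at proxy_ip asks target_ip
    for its hostname over HTTP and reports the collected answers back."""
    query = urlencode({"request": "hostname", "protocol": "http",
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{proxy_ip}:{port}/dial?{query}"

print(dial_url("10.44.0.2", "10.44.0.1"))
# http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1
```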
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:30:37.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:30:38.602: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 24 01:30:44.364: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:30:44.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7112" for this suite.

• [SLOW TEST:7.743 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":255,"skipped":4167,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
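The failure-condition test above creates a quota that admits only two pods and an rc that asks for more, so pod creation beyond the quota is rejected and the rc surfaces a failure condition in its status; scaling the rc down clears it. An illustrative manifest pair under that assumption (the actual replica count and image used by the test are not shown in the log):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # more than the quota allows; illustrative count
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: httpd
        image: httpd:2.4      # illustrative image
```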
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:30:45.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:31:05.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6960" for this suite.

• [SLOW TEST:19.956 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":256,"skipped":4173,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:31:05.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:31:05.787: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.541985ms)
Feb 24 01:31:05.806: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.556669ms)
Feb 24 01:31:05.814: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.059404ms)
Feb 24 01:31:05.824: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.306402ms)
Feb 24 01:31:05.830: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.642118ms)
Feb 24 01:31:05.835: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.65158ms)
Feb 24 01:31:05.839: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.993939ms)
Feb 24 01:31:05.843: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.010712ms)
Feb 24 01:31:05.848: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.538396ms)
Feb 24 01:31:05.853: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.039789ms)
Feb 24 01:31:05.857: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.285845ms)
Feb 24 01:31:05.861: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.188719ms)
Feb 24 01:31:05.867: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.478657ms)
Feb 24 01:31:05.875: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.478092ms)
Feb 24 01:31:05.881: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.094815ms)
Feb 24 01:31:05.886: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.58353ms)
Feb 24 01:31:05.903: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.362892ms)
Feb 24 01:31:05.920: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.699664ms)
Feb 24 01:31:05.931: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.803566ms)
Feb 24 01:31:05.967: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 35.82898ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:31:05.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4452" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":257,"skipped":4179,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
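The twenty requests above all hit the node proxy subresource with the kubelet port spelled out explicitly in the node name segment. A trivial Python sketch (function name illustrative) of how that API path is composed:

```python
def kubelet_logs_proxy_path(node: str, kubelet_port: int = 10250) -> str:
    """API-server path that proxies through the node 'proxy' subresource
    to the kubelet's /logs/ endpoint, with the port made explicit."""
    return f"/api/v1/nodes/{node}:{kubelet_port}/proxy/logs/"

print(kubelet_logs_proxy_path("jerma-node"))
# /api/v1/nodes/jerma-node:10250/proxy/logs/
```

Against a live cluster, the same path could be fetched with `kubectl get --raw` (not runnable here).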
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:31:06.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:31:06.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7230" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":258,"skipped":4198,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
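Kubernetes assigns the Guaranteed QoS class when every container's resource requests equal its limits for both cpu and memory, which is the condition the test above verifies. An illustrative pod manifest meeting it (name and image are not from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.17         # illustrative image
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:                 # equal to requests => status.qosClass: Guaranteed
        cpu: 100m
        memory: 128Mi
```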
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:31:06.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 24 01:31:06.458: INFO: Waiting up to 5m0s for pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186" in namespace "downward-api-3857" to be "success or failure"
Feb 24 01:31:06.476: INFO: Pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186": Phase="Pending", Reason="", readiness=false. Elapsed: 18.429065ms
Feb 24 01:31:08.488: INFO: Pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029932835s
Feb 24 01:31:10.497: INFO: Pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038961384s
Feb 24 01:31:13.354: INFO: Pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186": Phase="Pending", Reason="", readiness=false. Elapsed: 6.896210061s
Feb 24 01:31:15.361: INFO: Pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186": Phase="Pending", Reason="", readiness=false. Elapsed: 8.903046845s
Feb 24 01:31:17.366: INFO: Pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.908208426s
STEP: Saw pod success
Feb 24 01:31:17.366: INFO: Pod "downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186" satisfied condition "success or failure"
Feb 24 01:31:17.368: INFO: Trying to get logs from node jerma-node pod downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186 container dapi-container: 
STEP: delete the pod
Feb 24 01:31:17.406: INFO: Waiting for pod downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186 to disappear
Feb 24 01:31:17.445: INFO: Pod downward-api-6ec0cd59-fb2a-46b9-b746-f204169ee186 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:31:17.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3857" for this suite.

• [SLOW TEST:11.143 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":259,"skipped":4200,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
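The downward API test above exposes container cpu/memory limits as environment variables; when the container declares no limits, the node-allocatable values are surfaced as the defaults. An illustrative container env snippet (the container name `dapi-container` is taken from this run):

```yaml
env:
- name: CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: dapi-container
      resource: limits.cpu
- name: MEMORY_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: dapi-container
      resource: limits.memory
```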
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:31:17.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 24 01:31:29.999: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.004: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.012: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.025: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.030: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.035: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:30.041: INFO: Lookups using dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local]

Feb 24 01:31:35.055: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.063: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.079: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.085: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.107: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.117: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.121: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.124: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:35.132: INFO: Lookups using dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local]

Feb 24 01:31:40.056: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.064: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.068: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.072: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.086: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.093: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.106: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.111: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:40.123: INFO: Lookups using dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local]

Feb 24 01:31:45.051: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.087: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.141: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.185: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.243: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.248: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.252: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.256: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:45.262: INFO: Lookups using dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local]

Feb 24 01:31:50.052: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.061: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.069: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.073: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.103: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.108: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.112: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.116: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:50.124: INFO: Lookups using dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local]

Feb 24 01:31:55.058: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.067: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.083: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.095: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.111: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.115: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.118: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.121: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4: the server could not find the requested resource (get pods dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4)
Feb 24 01:31:55.128: INFO: Lookups using dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local]

Feb 24 01:32:00.108: INFO: DNS probes using dns-148/dns-test-3b5cf161-f7c8-4d32-8ea6-d35d404a06f4 succeeded
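The failed-lookup lists above are the cross-product of two client images ("wheezy", "jessie"), two protocols (UDP, TCP), and the two subdomain FQDNs under test. A minimal sketch of how such a probe matrix could be generated (illustrative only — names mirror the log, not the actual e2e helper code):

```python
def build_probe_names(images, hostnames, protocols):
    """Return probe keys like 'wheezy_udp@<fqdn>' for every combination,
    in the same order the log reports them (image, then host, then proto)."""
    return [f"{img}_{proto}@{host}"
            for img in images
            for host in hostnames
            for proto in protocols]

probes = build_probe_names(
    ["wheezy", "jessie"],
    ["dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local",
     "dns-test-service-2.dns-148.svc.cluster.local"],
    ["udp", "tcp"],
)
```

Each round in the log re-queries this full set until every name resolves, at which point the "DNS probes ... succeeded" line is emitted.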

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:32:00.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-148" for this suite.

• [SLOW TEST:42.901 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":260,"skipped":4263,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:32:00.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-a10036ea-e950-437e-8ed9-b47be07bdf15
STEP: Creating a pod to test consume configMaps
Feb 24 01:32:00.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4" in namespace "configmap-5174" to be "success or failure"
Feb 24 01:32:00.635: INFO: Pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.083364ms
Feb 24 01:32:02.641: INFO: Pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03485499s
Feb 24 01:32:04.645: INFO: Pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039146837s
Feb 24 01:32:06.655: INFO: Pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048798482s
Feb 24 01:32:08.672: INFO: Pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06658202s
Feb 24 01:32:10.678: INFO: Pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072234667s
STEP: Saw pod success
Feb 24 01:32:10.678: INFO: Pod "pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4" satisfied condition "success or failure"
Feb 24 01:32:10.681: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4 container configmap-volume-test: 
STEP: delete the pod
Feb 24 01:32:10.795: INFO: Waiting for pod pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4 to disappear
Feb 24 01:32:10.991: INFO: Pod pod-configmaps-9fd1cbee-93b8-4df9-9475-72c192d722f4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:32:10.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5174" for this suite.

• [SLOW TEST:10.654 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":261,"skipped":4296,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
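The "Waiting up to 5m0s for pod ... Elapsed: ..." lines above reflect a poll-until-condition loop with a hard timeout. A minimal sketch of that pattern, assuming nothing about the framework's actual implementation:

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll condition() every `interval` seconds until it returns True or
    `timeout` seconds elapse. Returns elapsed seconds on success, raises
    TimeoutError otherwise. (Sketch of the pattern behind the
    'Waiting up to 5m0s ... Elapsed' log lines; not the framework's code.)"""
    start = clock()
    while True:
        elapsed = clock() - start
        if condition():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.2f}s")
        sleep(interval)
```

The `clock` and `sleep` parameters are injectable so the loop can be exercised without real delays.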
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:32:11.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 24 01:32:11.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-9950'
Feb 24 01:32:13.359: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 24 01:32:13.359: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Feb 24 01:32:15.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9950'
Feb 24 01:32:15.571: INFO: stderr: ""
Feb 24 01:32:15.571: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:32:15.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9950" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":280,"completed":262,"skipped":4308,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
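Note the stderr warning above: `kubectl run --generator=deployment/apps.v1` was deprecated in favor of `kubectl create deployment`. The `Running '...'` lines render an argv assembled from the kubeconfig, the subcommand, and the test namespace; a hypothetical helper reproducing that rendering (the real framework builds its command differently):

```python
def kubectl_argv(kubeconfig, *args, namespace=None):
    """Assemble a kubectl argv in the shape the log lines show:
    binary, --kubeconfig, subcommand + flags, then --namespace last.
    (Hypothetical helper for illustration, not framework code.)"""
    argv = ["/usr/local/bin/kubectl", f"--kubeconfig={kubeconfig}", *args]
    if namespace is not None:
        argv.append(f"--namespace={namespace}")
    return argv
```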

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:32:15.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:32:15.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4259" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":263,"skipped":4308,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
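The STEP lines above walk the API discovery chain: fetch `/apis`, find the group, find the group/version, then descend into the group and version documents. The `/apis` response is an APIGroupList; a sketch of the "finding" step against a trimmed stand-in document (the sample dict below is an assumed, minimal shape, not a captured response):

```python
def find_group_version(discovery, group, group_version):
    """Search a parsed /apis discovery document (APIGroupList shape) for
    `group`, then for a specific groupVersion within it. Returns the
    version entry, or None if absent."""
    for g in discovery.get("groups", []):
        if g["name"] != group:
            continue
        for v in g.get("versions", []):
            if v["groupVersion"] == group_version:
                return v
    return None

# Minimal stand-in for a real /apis response (assumed shape, trimmed):
sample = {"groups": [{"name": "apiextensions.k8s.io",
                      "versions": [{"groupVersion": "apiextensions.k8s.io/v1",
                                    "version": "v1"}]}]}
```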

------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:32:15.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:32:16.124: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:32:18.131: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:32:20.132: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:32:22.142: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Pending, waiting for it to be Running (with Ready = true)
Feb 24 01:32:24.132: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:26.132: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:28.131: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:30.134: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:32.134: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:34.133: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:36.133: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:38.134: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:40.134: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = false)
Feb 24 01:32:42.133: INFO: The status of Pod test-webserver-61173d28-629b-42ee-a5c0-0253e5b528f4 is Running (Ready = true)
Feb 24 01:32:42.139: INFO: Container started at 2020-02-24 01:32:22 +0000 UTC, pod became ready at 2020-02-24 01:32:41 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:32:42.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1508" for this suite.

• [SLOW TEST:26.265 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4308,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
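The test above asserts that the pod stays Running-but-not-Ready until the readiness probe's initial delay has passed: the container started at 01:32:22 but the pod only became Ready at 01:32:41, a 19-second gap enforced by the probe's `initialDelaySeconds` (the exact probe settings are in the test spec and not shown in this log). The arithmetic the final INFO line reports, using the timestamps copied from the log:

```python
from datetime import datetime, timedelta

# Timestamps copied from the 'Container started at ... pod became ready at ...' line.
started = datetime(2020, 2, 24, 1, 32, 22)
became_ready = datetime(2020, 2, 24, 1, 32, 41)

observed_delay = (became_ready - started).total_seconds()

def earliest_ready(container_started, initial_delay_s):
    """First instant a readiness probe could mark the pod Ready: the kubelet
    does not run the probe until initialDelaySeconds after container start.
    (Illustrative arithmetic; the 15s value used in the test below is an
    assumption, not the conformance test's actual setting.)"""
    return container_started + timedelta(seconds=initial_delay_s)
```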
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:32:42.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-beb1076c-6334-4d5d-97c0-162a88e5d83a
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:32:54.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4192" for this suite.

• [SLOW TEST:12.269 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":265,"skipped":4328,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:32:54.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:32:54.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 24 01:32:57.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2299 create -f -'
Feb 24 01:33:00.559: INFO: stderr: ""
Feb 24 01:33:00.559: INFO: stdout: "e2e-test-crd-publish-openapi-5040-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 24 01:33:00.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2299 delete e2e-test-crd-publish-openapi-5040-crds test-cr'
Feb 24 01:33:00.703: INFO: stderr: ""
Feb 24 01:33:00.703: INFO: stdout: "e2e-test-crd-publish-openapi-5040-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 24 01:33:00.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2299 apply -f -'
Feb 24 01:33:01.087: INFO: stderr: ""
Feb 24 01:33:01.087: INFO: stdout: "e2e-test-crd-publish-openapi-5040-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 24 01:33:01.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2299 delete e2e-test-crd-publish-openapi-5040-crds test-cr'
Feb 24 01:33:01.208: INFO: stderr: ""
Feb 24 01:33:01.208: INFO: stdout: "e2e-test-crd-publish-openapi-5040-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 24 01:33:01.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5040-crds'
Feb 24 01:33:01.475: INFO: stderr: ""
Feb 24 01:33:01.475: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5040-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:33:04.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2299" for this suite.

• [SLOW TEST:10.500 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":266,"skipped":4339,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:33:04.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:33:05.639: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:33:07.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:33:09.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:33:11.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104785, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:33:14.689: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
Feb 24 01:33:14.768: INFO: Waiting for webhook configuration to be ready...
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:33:15.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6101" for this suite.
STEP: Destroying namespace "webhook-6101-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.479 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":267,"skipped":4362,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:33:15.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Feb 24 01:33:15.552: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Feb 24 01:33:16.202: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 24 01:33:18.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:33:20.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:33:22.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:33:24.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:33:26.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104796, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:33:29.041: INFO: Waited 622.262585ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:33:29.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5608" for this suite.

• [SLOW TEST:14.383 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":268,"skipped":4365,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:33:29.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 01:33:29.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d" in namespace "downward-api-8803" to be "success or failure"
Feb 24 01:33:30.007: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.913362ms
Feb 24 01:33:32.016: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037493736s
Feb 24 01:33:34.087: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107864392s
Feb 24 01:33:36.092: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113405607s
Feb 24 01:33:38.098: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119020634s
Feb 24 01:33:40.108: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.129109566s
Feb 24 01:33:42.114: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.134881665s
STEP: Saw pod success
Feb 24 01:33:42.114: INFO: Pod "downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d" satisfied condition "success or failure"
Feb 24 01:33:42.116: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d container client-container: 
STEP: delete the pod
Feb 24 01:33:42.190: INFO: Waiting for pod downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d to disappear
Feb 24 01:33:42.194: INFO: Pod downwardapi-volume-7558a4a2-13c9-4fd3-846a-d86bb5ccf62d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:33:42.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8803" for this suite.

• [SLOW TEST:12.409 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4368,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:33:42.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 24 01:33:42.262: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 24 01:33:42.323: INFO: Waiting for terminating namespaces to be deleted...
Feb 24 01:33:42.327: INFO: 
Logging pods the kubelet thinks is on node jerma-node before test
Feb 24 01:33:42.332: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.332: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 01:33:42.332: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 24 01:33:42.332: INFO: 	Container weave ready: true, restart count 1
Feb 24 01:33:42.332: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 01:33:42.332: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 24 01:33:42.348: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.348: INFO: 	Container coredns ready: true, restart count 0
Feb 24 01:33:42.349: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.349: INFO: 	Container coredns ready: true, restart count 0
Feb 24 01:33:42.349: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.349: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 24 01:33:42.349: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 24 01:33:42.349: INFO: 	Container weave ready: true, restart count 0
Feb 24 01:33:42.349: INFO: 	Container weave-npc ready: true, restart count 0
Feb 24 01:33:42.349: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.349: INFO: 	Container kube-controller-manager ready: true, restart count 17
Feb 24 01:33:42.349: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.349: INFO: 	Container kube-scheduler ready: true, restart count 23
Feb 24 01:33:42.349: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.349: INFO: 	Container etcd ready: true, restart count 1
Feb 24 01:33:42.349: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 24 01:33:42.349: INFO: 	Container kube-apiserver ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f2938a6a-232c-480d-92fc-4345e0d5f22d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f2938a6a-232c-480d-92fc-4345e0d5f22d off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f2938a6a-232c-480d-92fc-4345e0d5f22d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:33:59.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2264" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:16.946 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":280,"completed":270,"skipped":4379,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:33:59.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:33:59.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:34:01.849: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:34:03.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:34:05.860: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:34:07.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104839, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:34:10.939: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:34:10.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4769-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:34:12.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-507" for this suite.
STEP: Destroying namespace "webhook-507-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.501 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":271,"skipped":4383,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:34:12.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 24 01:34:12.814: INFO: Waiting up to 5m0s for pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6" in namespace "downward-api-5824" to be "success or failure"
Feb 24 01:34:12.909: INFO: Pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6": Phase="Pending", Reason="", readiness=false. Elapsed: 94.368397ms
Feb 24 01:34:14.919: INFO: Pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10420904s
Feb 24 01:34:16.926: INFO: Pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111169548s
Feb 24 01:34:18.933: INFO: Pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118534146s
Feb 24 01:34:20.939: INFO: Pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124517189s
Feb 24 01:34:22.948: INFO: Pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133370325s
STEP: Saw pod success
Feb 24 01:34:22.948: INFO: Pod "downward-api-73df362b-ffef-463b-b67b-45abb81435b6" satisfied condition "success or failure"
Feb 24 01:34:22.952: INFO: Trying to get logs from node jerma-node pod downward-api-73df362b-ffef-463b-b67b-45abb81435b6 container dapi-container: 
STEP: delete the pod
Feb 24 01:34:23.238: INFO: Waiting for pod downward-api-73df362b-ffef-463b-b67b-45abb81435b6 to disappear
Feb 24 01:34:23.323: INFO: Pod downward-api-73df362b-ffef-463b-b67b-45abb81435b6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:34:23.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5824" for this suite.

• [SLOW TEST:10.683 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":272,"skipped":4393,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:34:23.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Feb 24 01:34:23.671: INFO: Waiting up to 5m0s for pod "client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48" in namespace "containers-8740" to be "success or failure"
Feb 24 01:34:23.677: INFO: Pod "client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48": Phase="Pending", Reason="", readiness=false. Elapsed: 5.781665ms
Feb 24 01:34:25.683: INFO: Pod "client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012524368s
Feb 24 01:34:27.692: INFO: Pod "client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02057627s
Feb 24 01:34:29.710: INFO: Pod "client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038733621s
Feb 24 01:34:31.718: INFO: Pod "client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04659504s
STEP: Saw pod success
Feb 24 01:34:31.718: INFO: Pod "client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48" satisfied condition "success or failure"
Feb 24 01:34:31.723: INFO: Trying to get logs from node jerma-node pod client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48 container test-container: 
STEP: delete the pod
Feb 24 01:34:31.793: INFO: Waiting for pod client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48 to disappear
Feb 24 01:34:31.799: INFO: Pod client-containers-421c1698-55c2-4bf0-b1bd-6c63be7edc48 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:34:31.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8740" for this suite.

• [SLOW TEST:8.481 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4403,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:34:31.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-9101
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9101 to expose endpoints map[]
Feb 24 01:34:32.367: INFO: Get endpoints failed (4.31427ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 24 01:34:33.376: INFO: successfully validated that service endpoint-test2 in namespace services-9101 exposes endpoints map[] (1.013202486s elapsed)
STEP: Creating pod pod1 in namespace services-9101
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9101 to expose endpoints map[pod1:[80]]
Feb 24 01:34:37.765: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.375356935s elapsed, will retry)
Feb 24 01:34:40.907: INFO: successfully validated that service endpoint-test2 in namespace services-9101 exposes endpoints map[pod1:[80]] (7.517712445s elapsed)
STEP: Creating pod pod2 in namespace services-9101
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9101 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 24 01:34:45.470: INFO: Unexpected endpoints: found map[548faddb-553b-4a86-a609-ae391f1a45b0:[80]], expected map[pod1:[80] pod2:[80]] (4.515111869s elapsed, will retry)
Feb 24 01:34:49.543: INFO: successfully validated that service endpoint-test2 in namespace services-9101 exposes endpoints map[pod1:[80] pod2:[80]] (8.588059849s elapsed)
STEP: Deleting pod pod1 in namespace services-9101
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9101 to expose endpoints map[pod2:[80]]
Feb 24 01:34:49.623: INFO: successfully validated that service endpoint-test2 in namespace services-9101 exposes endpoints map[pod2:[80]] (62.482633ms elapsed)
STEP: Deleting pod pod2 in namespace services-9101
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9101 to expose endpoints map[]
Feb 24 01:34:50.655: INFO: successfully validated that service endpoint-test2 in namespace services-9101 exposes endpoints map[] (1.018435057s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:34:50.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9101" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:19.038 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":280,"completed":274,"skipped":4410,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:34:50.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 24 01:34:54.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 24 01:34:57.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:34:59.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 24 01:35:01.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104897, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718104894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 24 01:35:04.892: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 24 01:35:04.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2864-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:35:06.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2665" for this suite.
STEP: Destroying namespace "webhook-2665-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:15.768 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":275,"skipped":4432,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:35:06.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 24 01:35:06.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4" in namespace "projected-1435" to be "success or failure"
Feb 24 01:35:06.778: INFO: Pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.234944ms
Feb 24 01:35:08.787: INFO: Pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023106132s
Feb 24 01:35:10.795: INFO: Pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030645977s
Feb 24 01:35:12.806: INFO: Pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041889243s
Feb 24 01:35:14.814: INFO: Pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049475809s
Feb 24 01:35:16.824: INFO: Pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060122839s
STEP: Saw pod success
Feb 24 01:35:16.824: INFO: Pod "downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4" satisfied condition "success or failure"
Feb 24 01:35:16.835: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4 container client-container: 
STEP: delete the pod
Feb 24 01:35:16.969: INFO: Waiting for pod downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4 to disappear
Feb 24 01:35:16.974: INFO: Pod downwardapi-volume-16fd0f6d-14ee-4bb9-85ad-de0d5766f5b4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:35:16.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1435" for this suite.

• [SLOW TEST:10.401 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":276,"skipped":4508,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:35:17.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Feb 24 01:35:17.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8031 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 24 01:35:26.949: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0224 01:35:25.793268    4631 log.go:172] (0xc0004ca0b0) (0xc000946280) Create stream\nI0224 01:35:25.793421    4631 log.go:172] (0xc0004ca0b0) (0xc000946280) Stream added, broadcasting: 1\nI0224 01:35:25.798246    4631 log.go:172] (0xc0004ca0b0) Reply frame received for 1\nI0224 01:35:25.798287    4631 log.go:172] (0xc0004ca0b0) (0xc0005c3b80) Create stream\nI0224 01:35:25.798294    4631 log.go:172] (0xc0004ca0b0) (0xc0005c3b80) Stream added, broadcasting: 3\nI0224 01:35:25.799904    4631 log.go:172] (0xc0004ca0b0) Reply frame received for 3\nI0224 01:35:25.799963    4631 log.go:172] (0xc0004ca0b0) (0xc0007d80a0) Create stream\nI0224 01:35:25.799981    4631 log.go:172] (0xc0004ca0b0) (0xc0007d80a0) Stream added, broadcasting: 5\nI0224 01:35:25.801856    4631 log.go:172] (0xc0004ca0b0) Reply frame received for 5\nI0224 01:35:25.801874    4631 log.go:172] (0xc0004ca0b0) (0xc000946320) Create stream\nI0224 01:35:25.801879    4631 log.go:172] (0xc0004ca0b0) (0xc000946320) Stream added, broadcasting: 7\nI0224 01:35:25.804586    4631 log.go:172] (0xc0004ca0b0) Reply frame received for 7\nI0224 01:35:25.805021    4631 log.go:172] (0xc0005c3b80) (3) Writing data frame\nI0224 01:35:25.805305    4631 log.go:172] (0xc0005c3b80) (3) Writing data frame\nI0224 01:35:25.809213    4631 log.go:172] (0xc0004ca0b0) Data frame received for 5\nI0224 01:35:25.809240    4631 log.go:172] (0xc0007d80a0) (5) Data frame handling\nI0224 01:35:25.809264    4631 log.go:172] (0xc0007d80a0) (5) Data frame sent\nI0224 01:35:25.812820    4631 log.go:172] (0xc0004ca0b0) Data frame received for 5\nI0224 01:35:25.812838    4631 log.go:172] (0xc0007d80a0) (5) Data frame handling\nI0224 01:35:25.812854    4631 log.go:172] (0xc0007d80a0) (5) Data frame sent\nI0224 01:35:26.837423    4631 log.go:172] (0xc0004ca0b0) Data frame received for 1\nI0224 01:35:26.837507    4631 log.go:172] (0xc000946280) (1) Data frame handling\nI0224 01:35:26.837734    4631 log.go:172] (0xc000946280) (1) Data frame sent\nI0224 01:35:26.837793    4631 log.go:172] (0xc0004ca0b0) (0xc000946280) Stream removed, broadcasting: 1\nI0224 01:35:26.838474    4631 log.go:172] (0xc0004ca0b0) (0xc0007d80a0) Stream removed, broadcasting: 5\nI0224 01:35:26.838922    4631 log.go:172] (0xc0004ca0b0) (0xc0005c3b80) Stream removed, broadcasting: 3\nI0224 01:35:26.839184    4631 log.go:172] (0xc0004ca0b0) (0xc000946320) Stream removed, broadcasting: 7\nI0224 01:35:26.839365    4631 log.go:172] (0xc0004ca0b0) Go away received\nI0224 01:35:26.839425    4631 log.go:172] (0xc0004ca0b0) (0xc000946280) Stream removed, broadcasting: 1\nI0224 01:35:26.839459    4631 log.go:172] (0xc0004ca0b0) (0xc0005c3b80) Stream removed, broadcasting: 3\nI0224 01:35:26.839473    4631 log.go:172] (0xc0004ca0b0) (0xc0007d80a0) Stream removed, broadcasting: 5\nI0224 01:35:26.839498    4631 log.go:172] (0xc0004ca0b0) (0xc000946320) Stream removed, broadcasting: 7\n"
Feb 24 01:35:26.949: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:35:30.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8031" for this suite.

• [SLOW TEST:13.937 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":280,"completed":277,"skipped":4531,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:35:30.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6149
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-6149
I0224 01:35:31.348847      10 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6149, replica count: 2
I0224 01:35:34.400666      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:35:37.401619      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:35:40.403306      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0224 01:35:43.404359      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 24 01:35:43.404: INFO: Creating new exec pod
Feb 24 01:35:52.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6149 execpod6w4p9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 24 01:35:52.895: INFO: stderr: "I0224 01:35:52.680571    4655 log.go:172] (0xc000ae2bb0) (0xc000a72780) Create stream\nI0224 01:35:52.680661    4655 log.go:172] (0xc000ae2bb0) (0xc000a72780) Stream added, broadcasting: 1\nI0224 01:35:52.688328    4655 log.go:172] (0xc000ae2bb0) Reply frame received for 1\nI0224 01:35:52.688389    4655 log.go:172] (0xc000ae2bb0) (0xc00061a780) Create stream\nI0224 01:35:52.688401    4655 log.go:172] (0xc000ae2bb0) (0xc00061a780) Stream added, broadcasting: 3\nI0224 01:35:52.689875    4655 log.go:172] (0xc000ae2bb0) Reply frame received for 3\nI0224 01:35:52.689904    4655 log.go:172] (0xc000ae2bb0) (0xc0003eb400) Create stream\nI0224 01:35:52.689919    4655 log.go:172] (0xc000ae2bb0) (0xc0003eb400) Stream added, broadcasting: 5\nI0224 01:35:52.691406    4655 log.go:172] (0xc000ae2bb0) Reply frame received for 5\nI0224 01:35:52.780006    4655 log.go:172] (0xc000ae2bb0) Data frame received for 5\nI0224 01:35:52.780149    4655 log.go:172] (0xc0003eb400) (5) Data frame handling\nI0224 01:35:52.780165    4655 log.go:172] (0xc0003eb400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0224 01:35:52.788316    4655 log.go:172] (0xc000ae2bb0) Data frame received for 5\nI0224 01:35:52.788343    4655 log.go:172] (0xc0003eb400) (5) Data frame handling\nI0224 01:35:52.788362    4655 log.go:172] (0xc0003eb400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0224 01:35:52.888007    4655 log.go:172] (0xc000ae2bb0) (0xc00061a780) Stream removed, broadcasting: 3\nI0224 01:35:52.888174    4655 log.go:172] (0xc000ae2bb0) Data frame received for 1\nI0224 01:35:52.888208    4655 log.go:172] (0xc000ae2bb0) (0xc0003eb400) Stream removed, broadcasting: 5\nI0224 01:35:52.888254    4655 log.go:172] (0xc000a72780) (1) Data frame handling\nI0224 01:35:52.888305    4655 log.go:172] (0xc000a72780) (1) Data frame sent\nI0224 01:35:52.888320    4655 log.go:172] (0xc000ae2bb0) (0xc000a72780) Stream removed, broadcasting: 1\nI0224 01:35:52.888332    4655 log.go:172] (0xc000ae2bb0) Go away received\nI0224 01:35:52.888884    4655 log.go:172] (0xc000ae2bb0) (0xc000a72780) Stream removed, broadcasting: 1\nI0224 01:35:52.888939    4655 log.go:172] (0xc000ae2bb0) (0xc00061a780) Stream removed, broadcasting: 3\nI0224 01:35:52.888952    4655 log.go:172] (0xc000ae2bb0) (0xc0003eb400) Stream removed, broadcasting: 5\n"
Feb 24 01:35:52.896: INFO: stdout: ""
Feb 24 01:35:52.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6149 execpod6w4p9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.235.15 80'
Feb 24 01:35:53.230: INFO: stderr: "I0224 01:35:53.043488    4676 log.go:172] (0xc000a06e70) (0xc0009fa000) Create stream\nI0224 01:35:53.043715    4676 log.go:172] (0xc000a06e70) (0xc0009fa000) Stream added, broadcasting: 1\nI0224 01:35:53.046493    4676 log.go:172] (0xc000a06e70) Reply frame received for 1\nI0224 01:35:53.046566    4676 log.go:172] (0xc000a06e70) (0xc000930000) Create stream\nI0224 01:35:53.046580    4676 log.go:172] (0xc000a06e70) (0xc000930000) Stream added, broadcasting: 3\nI0224 01:35:53.047788    4676 log.go:172] (0xc000a06e70) Reply frame received for 3\nI0224 01:35:53.047805    4676 log.go:172] (0xc000a06e70) (0xc0009fa0a0) Create stream\nI0224 01:35:53.047810    4676 log.go:172] (0xc000a06e70) (0xc0009fa0a0) Stream added, broadcasting: 5\nI0224 01:35:53.049267    4676 log.go:172] (0xc000a06e70) Reply frame received for 5\nI0224 01:35:53.136148    4676 log.go:172] (0xc000a06e70) Data frame received for 5\nI0224 01:35:53.136198    4676 log.go:172] (0xc0009fa0a0) (5) Data frame handling\nI0224 01:35:53.136219    4676 log.go:172] (0xc0009fa0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.235.15 80\nI0224 01:35:53.137752    4676 log.go:172] (0xc000a06e70) Data frame received for 5\nI0224 01:35:53.137787    4676 log.go:172] (0xc0009fa0a0) (5) Data frame handling\nI0224 01:35:53.137800    4676 log.go:172] (0xc0009fa0a0) (5) Data frame sent\nConnection to 10.96.235.15 80 port [tcp/http] succeeded!\nI0224 01:35:53.221662    4676 log.go:172] (0xc000a06e70) Data frame received for 1\nI0224 01:35:53.221728    4676 log.go:172] (0xc000a06e70) (0xc0009fa0a0) Stream removed, broadcasting: 5\nI0224 01:35:53.221802    4676 log.go:172] (0xc0009fa000) (1) Data frame handling\nI0224 01:35:53.221817    4676 log.go:172] (0xc000a06e70) (0xc000930000) Stream removed, broadcasting: 3\nI0224 01:35:53.221830    4676 log.go:172] (0xc0009fa000) (1) Data frame sent\nI0224 01:35:53.221840    4676 log.go:172] (0xc000a06e70) (0xc0009fa000) Stream removed, broadcasting: 1\nI0224 01:35:53.221849    4676 log.go:172] (0xc000a06e70) Go away received\nI0224 01:35:53.222244    4676 log.go:172] (0xc000a06e70) (0xc0009fa000) Stream removed, broadcasting: 1\nI0224 01:35:53.222253    4676 log.go:172] (0xc000a06e70) (0xc000930000) Stream removed, broadcasting: 3\nI0224 01:35:53.222259    4676 log.go:172] (0xc000a06e70) (0xc0009fa0a0) Stream removed, broadcasting: 5\n"
Feb 24 01:35:53.231: INFO: stdout: ""
Feb 24 01:35:53.231: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:35:53.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6149" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:22.317 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":278,"skipped":4534,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 24 01:35:53.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 24 01:35:53.415: INFO: Waiting up to 5m0s for pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691" in namespace "emptydir-9846" to be "success or failure"
Feb 24 01:35:53.437: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Pending", Reason="", readiness=false. Elapsed: 22.262538ms
Feb 24 01:35:56.315: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.90016561s
Feb 24 01:35:58.321: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Pending", Reason="", readiness=false. Elapsed: 4.906014122s
Feb 24 01:36:00.331: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Pending", Reason="", readiness=false. Elapsed: 6.916320339s
Feb 24 01:36:02.905: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Pending", Reason="", readiness=false. Elapsed: 9.490267737s
Feb 24 01:36:04.914: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Pending", Reason="", readiness=false. Elapsed: 11.498994107s
Feb 24 01:36:06.926: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Pending", Reason="", readiness=false. Elapsed: 13.51075248s
Feb 24 01:36:08.934: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.518689797s
STEP: Saw pod success
Feb 24 01:36:08.934: INFO: Pod "pod-50350502-67a0-47d9-a6ee-289d1fcc2691" satisfied condition "success or failure"
Feb 24 01:36:08.940: INFO: Trying to get logs from node jerma-node pod pod-50350502-67a0-47d9-a6ee-289d1fcc2691 container test-container: 
STEP: delete the pod
Feb 24 01:36:09.352: INFO: Waiting for pod pod-50350502-67a0-47d9-a6ee-289d1fcc2691 to disappear
Feb 24 01:36:09.364: INFO: Pod pod-50350502-67a0-47d9-a6ee-289d1fcc2691 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 24 01:36:09.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9846" for this suite.

• [SLOW TEST:16.091 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4552,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
Feb 24 01:36:09.383: INFO: Running AfterSuite actions on all nodes
Feb 24 01:36:09.383: INFO: Running AfterSuite actions on node 1
Feb 24 01:36:09.383: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339

Ran 280 of 4845 Specs in 7041.684 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (7041.84s)
FAIL
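The closing summary ("Ran 280 of 4845 Specs ... 279 Passed | 1 Failed") is derived from the same results recorded in the JUnit report written above (/home/opnfv/functest/results/k8s_conformance/junit_01.xml). A summary of that shape can be recovered from such a report with a short script — a minimal sketch using an inline sample document, since the exact schema of this particular file is an assumption (the `tests`/`failures` attributes and `<failure>` child element follow the common JUnit convention):

```python
import xml.etree.ElementTree as ET

# Inline sample standing in for junit_01.xml; the real file's schema is assumed
# to follow the common JUnit convention (tests/failures attributes, <failure> child).
SAMPLE = """<testsuite name="Kubernetes e2e suite" tests="280" failures="1" time="7041.684">
  <testcase name="[sig-storage] EmptyDir volumes should support (non-root,0666,default)" time="16.091"/>
  <testcase name="[sig-cli] Kubectl client Guestbook application should create and stop a working application" time="120.0">
    <failure message="timed out waiting for frontend">details elided</failure>
  </testcase>
</testsuite>"""

def summarize(xml_text: str) -> str:
    suite = ET.fromstring(xml_text)
    total = int(suite.get("tests", "0"))
    failed = int(suite.get("failures", "0"))
    # Collect the names of failing test cases for the per-failure lines.
    failing = [tc.get("name") for tc in suite.iter("testcase")
               if tc.find("failure") is not None]
    verdict = "FAIL!" if failed else "SUCCESS!"
    lines = [f"{verdict} -- {total - failed} Passed | {failed} Failed"]
    lines += [f"  [Fail] {name}" for name in failing]
    return "\n".join(lines)

print(summarize(SAMPLE))
# → FAIL! -- 279 Passed | 1 Failed
#     [Fail] [sig-cli] Kubectl client Guestbook application should create and stop a working application
```

The script only reads suite-level attributes plus the presence of `<failure>` children, so it tolerates extra attributes (e.g. `skipped`, `errors`) that different JUnit producers add.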