I0108 21:08:27.144816 9 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0108 21:08:27.145335 9 e2e.go:109] Starting e2e run "b1feba36-0fc8-4ca2-a7ec-3dd412c5917a" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578517705 - Will randomize all specs
Will run 278 of 4814 specs

Jan 8 21:08:27.209: INFO: >>> kubeConfig: /root/.kube/config
Jan 8 21:08:27.213: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 8 21:08:27.245: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 8 21:08:27.286: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 8 21:08:27.286: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 8 21:08:27.287: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 8 21:08:27.294: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 8 21:08:27.294: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 8 21:08:27.294: INFO: e2e test version: v1.17.0
Jan 8 21:08:27.296: INFO: kube-apiserver version: v1.17.0
Jan 8 21:08:27.296: INFO: >>> kubeConfig: /root/.kube/config
Jan 8 21:08:27.304: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 8 21:08:27.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Jan 8 21:08:27.376: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 8 21:09:09.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4914" for this suite.
• [SLOW TEST:41.960 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":17,"failed":0}
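Editor's note: the 'terminate-cmd-rpa/rpof/rpn' containers above exercise the three restart policies (Always, OnFailure, Never) against a container that exits, then assert RestartCount, Phase, Ready, and State. A minimal client-go sketch of the same idea, assuming a reachable cluster, the "default" namespace, a busybox image, and current client-go signatures (the context argument was added in client-go v0.18):

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A container that exits 0 immediately. With RestartPolicy "Never" the pod
	// should settle in phase Succeeded with Ready=false and RestartCount 0;
	// with "Always" the kubelet would keep restarting it instead.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 0"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	// Poll the pod and inspect Status.Phase and Status.ContainerStatuses[0]
	// (RestartCount, State) to assert the expected values, as the test does.
	got, err := cs.CoreV1().Pods("default").Get(context.TODO(), "terminate-cmd-demo", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("phase:", got.Status.Phase)
}
```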
Phase="Pending", Reason="", readiness=false. Elapsed: 125.887295ms Jan 8 21:09:11.571: INFO: Pod "pod-cf489984-7001-416c-ad77-d82b1877bfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13270889s Jan 8 21:09:13.577: INFO: Pod "pod-cf489984-7001-416c-ad77-d82b1877bfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138311395s Jan 8 21:09:15.590: INFO: Pod "pod-cf489984-7001-416c-ad77-d82b1877bfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152078187s Jan 8 21:09:17.601: INFO: Pod "pod-cf489984-7001-416c-ad77-d82b1877bfc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162478187s STEP: Saw pod success Jan 8 21:09:17.601: INFO: Pod "pod-cf489984-7001-416c-ad77-d82b1877bfc2" satisfied condition "success or failure" Jan 8 21:09:17.607: INFO: Trying to get logs from node jerma-node pod pod-cf489984-7001-416c-ad77-d82b1877bfc2 container test-container: STEP: delete the pod Jan 8 21:09:17.877: INFO: Waiting for pod pod-cf489984-7001-416c-ad77-d82b1877bfc2 to disappear Jan 8 21:09:17.891: INFO: Pod pod-cf489984-7001-416c-ad77-d82b1877bfc2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:09:17.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8523" for this suite. • [SLOW TEST:8.675 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":38,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:09:17.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:09:17.997: INFO: Creating deployment "test-recreate-deployment" Jan 8 21:09:18.010: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 8 21:09:18.070: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 8 21:09:20.079: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 8 21:09:20.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, 
SSSSSSSSSS
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 8 21:09:17.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 8 21:09:17.997: INFO: Creating deployment "test-recreate-deployment"
Jan 8 21:09:18.010: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 8 21:09:18.070: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 8 21:09:20.079: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 8 21:09:20.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:22.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:24.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114558, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:26.093: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 8 21:09:26.110: INFO: Updating deployment test-recreate-deployment
Jan 8 21:09:26.110: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 8 21:09:26.503: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1819 /apis/apps/v1/namespaces/deployment-1819/deployments/test-recreate-deployment 915568e8-ba31-4ea3-a91b-0a237dcfb4d1 881870 2 2020-01-08 21:09:17 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001915208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-08 21:09:26 +0000 UTC,LastTransitionTime:2020-01-08 21:09:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-08 21:09:26 +0000 UTC,LastTransitionTime:2020-01-08 21:09:18 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
Jan 8 21:09:26.532: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1819 /apis/apps/v1/namespaces/deployment-1819/replicasets/test-recreate-deployment-5f94c574ff 7e3b50a0-e669-45da-b823-3dbe10efda8d 881868 1 2020-01-08 21:09:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 915568e8-ba31-4ea3-a91b-0a237dcfb4d1 0xc001915597 0xc001915598}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0019155f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 8 21:09:26.532: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 8 21:09:26.532: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1819 /apis/apps/v1/namespaces/deployment-1819/replicasets/test-recreate-deployment-799c574856 5ea66750-d1fe-470c-aebc-59263bae1e36 881859 2 2020-01-08 21:09:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 915568e8-ba31-4ea3-a91b-0a237dcfb4d1 0xc001915667 0xc001915668}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0019156d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 8 21:09:26.612: INFO: Pod "test-recreate-deployment-5f94c574ff-f2ghg" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-f2ghg test-recreate-deployment-5f94c574ff- deployment-1819 /api/v1/namespaces/deployment-1819/pods/test-recreate-deployment-5f94c574ff-f2ghg 216fceca-2f8a-439a-8b1a-393dc980f336 881865 0 2020-01-08 21:09:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 7e3b50a0-e669-45da-b823-3dbe10efda8d 0xc00174d2b7 0xc00174d2b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wj5s4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wj5s4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wj5s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:09:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 8 21:09:26.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1819" for this suite.
• [SLOW TEST:8.688 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":3,"skipped":48,"failed":0}
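Editor's note: the Recreate strategy tested above tears every old pod down before any new one starts; that is why the log shows the old ReplicaSet at Replicas:*0 while the new pod is still Pending. A minimal sketch of such a deployment, assuming the "default" namespace and the same httpd image as the test:

```go
package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to 0 first, then bring up new pods.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	// Updating d.Spec.Template (e.g. a new image) now deletes every old pod
	// before the replacement ReplicaSet is scaled up, as the log above shows.
}
```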
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 8 21:09:26.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 8 21:09:26.791: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4151 /api/v1/namespaces/watch-4151/configmaps/e2e-watch-test-watch-closed 29a01606-289e-49e8-b434-598d85a09a74 881877 0 2020-01-08 21:09:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 8 21:09:26.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4151 /api/v1/namespaces/watch-4151/configmaps/e2e-watch-test-watch-closed 29a01606-289e-49e8-b434-598d85a09a74 881878 0 2020-01-08 21:09:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 8 21:09:26.815: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4151 /api/v1/namespaces/watch-4151/configmaps/e2e-watch-test-watch-closed 29a01606-289e-49e8-b434-598d85a09a74 881879 0 2020-01-08 21:09:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 8 21:09:26.815: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4151 /api/v1/namespaces/watch-4151/configmaps/e2e-watch-test-watch-closed 29a01606-289e-49e8-b434-598d85a09a74 881880 0 2020-01-08 21:09:26 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 8 21:09:26.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4151" for this suite.
•
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":4,"skipped":68,"failed":0}
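Editor's note: this watch test leans on list/watch semantics: a watch opened with the resourceVersion of the last observed event replays every change made while the previous watch was closed (here: the MODIFIED with mutation 2, then the DELETED). A hedged client-go sketch; the label selector matches the test's configmap labels, but names and namespace are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// First watch: remember the resourceVersion of the last event we saw.
	w1, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=watch-closed-and-restarted",
	})
	if err != nil {
		log.Fatal(err)
	}
	var lastRV string
	for ev := range w1.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			lastRV = cm.ResourceVersion
			fmt.Println("got:", ev.Type, cm.Name)
			break // the test closes the watch after two notifications; simplified here
		}
	}
	w1.Stop()

	// Second watch: resuming from lastRV replays every change that happened
	// while the first watch was closed, in order, before delivering new events.
	w2, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastRV,
	})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w2.ResultChan() {
		fmt.Println("replayed:", ev.Type)
	}
}
```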
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 8 21:09:26.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 8 21:09:27.157: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 8 21:09:39.180: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 8 21:09:41.186: INFO: Creating deployment "test-rollover-deployment"
Jan 8 21:09:41.209: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 8 21:09:43.280: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 8 21:09:43.288: INFO: Ensure that both replica sets have 1 created replica
Jan 8 21:09:43.295: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 8 21:09:43.303: INFO: Updating deployment test-rollover-deployment
Jan 8 21:09:43.303: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 8 21:09:45.327: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 8 21:09:45.338: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 8 21:09:45.366: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:45.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114583, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:47.379: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:47.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114583, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:49.378: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:49.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114583, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:51.384: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:51.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114590, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:53.379: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:53.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114590, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:55.384: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:55.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114590, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:57.375: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:57.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114590, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:09:59.382: INFO: all replica sets need to contain the pod-template-hash label
Jan 8 21:09:59.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114590, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114581, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 8 21:10:01.381: INFO:
Jan 8 21:10:01.381: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 8 21:10:01.393: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5116 /apis/apps/v1/namespaces/deployment-5116/deployments/test-rollover-deployment 1994b8fd-6b67-4757-8b2c-154af99579f9 882061 2 2020-01-08 21:09:41 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023b60f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-08 21:09:41 +0000 UTC,LastTransitionTime:2020-01-08 21:09:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-08 21:10:00 +0000 UTC,LastTransitionTime:2020-01-08 21:09:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Jan 8 21:10:01.397: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5116 /apis/apps/v1/namespaces/deployment-5116/replicasets/test-rollover-deployment-574d6dfbff a3edaee5-74fa-46e5-b0a4-1664f350cc57 882051 2 2020-01-08 21:09:43 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 1994b8fd-6b67-4757-8b2c-154af99579f9 0xc0023b6567 0xc0023b6568}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023b65d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 8 21:10:01.397: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 8 21:10:01.398: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5116 /apis/apps/v1/namespaces/deployment-5116/replicasets/test-rollover-controller 5c6f03b7-60bd-4322-a841-727e15c13027 882060 2 2020-01-08 21:09:26 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 1994b8fd-6b67-4757-8b2c-154af99579f9 0xc0023b6497 0xc0023b6498}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0023b64f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 8 21:10:01.398: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5116 /apis/apps/v1/namespaces/deployment-5116/replicasets/test-rollover-deployment-f6c94f66c 6f3e6a23-9e2b-41a2-a651-23cf09a46b47 882002 2 2020-01-08 21:09:41 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 1994b8fd-6b67-4757-8b2c-154af99579f9 0xc0023b6640 0xc0023b6641}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023b66b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 8 21:10:01.403: INFO: Pod "test-rollover-deployment-574d6dfbff-bvljg" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-bvljg test-rollover-deployment-574d6dfbff- deployment-5116 /api/v1/namespaces/deployment-5116/pods/test-rollover-deployment-574d6dfbff-bvljg 3822abf6-6e20-427e-a533-2e2f33515d42 882027 0 2020-01-08 21:09:43 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff a3edaee5-74fa-46e5-b0a4-1664f350cc57 0xc00225acf7 0xc00225acf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mbgzw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mbgzw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mbgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:09:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:09:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:09:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:09:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-08 21:09:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:09:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://71c17988ba7c675681fe2b1aad809c67aee1f2604729593f772140134c96715e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 8 21:10:01.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5116" for this suite.
• [SLOW TEST:34.584 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":5,"skipped":94,"failed":0}
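Editor's note: the rollover deployment above runs with MaxUnavailable:0, MaxSurge:1 and MinReadySeconds:10, so the old pod is kept until the new one has been Ready for a full 10 seconds, which is why the log polls "is progressing" for roughly 17 seconds. A sketch of that strategy; Selector and Template are omitted to keep it short (see the Recreate example earlier):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	spec := appsv1.DeploymentSpec{
		Replicas: int32Ptr(1),
		// A new pod only counts as available after it has been Ready for 10s.
		MinReadySeconds: 10,
		Strategy: appsv1.DeploymentStrategy{
			Type: appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{
				MaxUnavailable: &maxUnavailable, // keep the old pod until the new one is available
				MaxSurge:       &maxSurge,       // allow one pod over the desired count during rollover
			},
		},
		// Selector and Template as in the Recreate example above.
	}
	fmt.Printf("%+v\n", spec.Strategy)
}
```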
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 8 21:10:01.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 8 21:10:27.659: INFO: Container started at 2020-01-08 21:10:08 +0000 UTC, pod became ready at 2020-01-08 21:10:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 8 21:10:27.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1656" for this suite.
• [SLOW TEST:26.261 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":94,"failed":0}
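Editor's note: the probe test above asserts that a pod with a readiness probe and a long initial delay never reports Ready before that delay elapses (here: started 21:10:08, ready 21:10:26) and never restarts. A sketch of such a spec; note that recent k8s.io/api releases name the probe handler field ProbeHandler (older releases call it Handler), so adjust to your client-go version:

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					// The probe succeeds as soon as it is allowed to run, so the
					// only thing delaying readiness is InitialDelaySeconds.
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"true"}},
					},
					InitialDelaySeconds: 30, // pod must not be Ready before this
					PeriodSeconds:       5,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	// Compare Status.ContainerStatuses[0].State.Running.StartedAt with the
	// Ready condition's LastTransitionTime, as the log line above does.
}
```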
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 8 21:10:27.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 8 21:10:27.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479" in namespace "downward-api-2472" to be "success or failure"
Jan 8 21:10:27.817: INFO: Pod "downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479": Phase="Pending", Reason="", readiness=false. Elapsed: 28.281172ms
Jan 8 21:10:29.826: INFO: Pod "downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036720037s
Jan 8 21:10:31.834: INFO: Pod "downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044643073s
Jan 8 21:10:33.843: INFO: Pod "downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054387683s
Jan 8 21:10:35.858: INFO: Pod "downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068775875s
STEP: Saw pod success
Jan 8 21:10:35.858: INFO: Pod "downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479" satisfied condition "success or failure"
Jan 8 21:10:35.865: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479 container client-container:
STEP: delete the pod
Jan 8 21:10:36.002: INFO: Waiting for pod downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479 to disappear
Jan 8 21:10:36.019: INFO: Pod downwardapi-volume-46372ce0-cb90-4400-ac38-d887924fc479 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 8 21:10:36.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2472" for this suite.
• [SLOW TEST:8.352 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":112,"failed":0}
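Editor's note: the downward API volume test projects the container's CPU limit into a file; when no limit is set, the kubelet substitutes the node's allocatable CPU, which is what the test asserts from the pod's output. A sketch of the volume wiring (file name and mount path are illustrative):

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// No resources.limits set: the projected value falls back to
				// the node's allocatable CPU, which is what the test checks.
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```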
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 8 21:10:36.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 8 21:10:44.235: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 8 21:10:54.400: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 8 21:10:54.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-637" for this suite.
• [SLOW TEST:18.384 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":8,"skipped":189,"failed":0}
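Editor's note: a graceful delete sends the termination signal and leaves the pod visible (Terminating) until the kubelet confirms shutdown; the test then polls until the pod is gone. A sketch, with the pod name and namespace illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Delete with an explicit 30s grace period.
	grace := int64(30)
	if err := cs.CoreV1().Pods("default").Delete(context.TODO(), "my-pod",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		log.Fatal(err)
	}

	// Poll until the pod is gone; a NotFound error means the kubelet observed
	// the termination notice and completed the deletion, as the log reports.
	for {
		_, err := cs.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod removed")
			return
		}
		if err != nil {
			log.Fatal(err)
		}
		time.Sleep(2 * time.Second)
	}
}
```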
• [SLOW TEST:8.327 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:11:02.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5112 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-5112 Jan 8 21:11:02.859: INFO: Found 0 stateful pods, waiting for 1 Jan 8 21:11:12.879: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 8 21:11:12.942: INFO: Deleting all statefulset in ns statefulset-5112 Jan 8 21:11:12.956: INFO: Scaling statefulset ss to 0 Jan 8 21:11:23.044: INFO: Waiting for statefulset status.replicas updated to 0 Jan 8 21:11:23.051: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:11:23.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5112" for this suite. 
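The "getting/updating a scale subresource" steps above map onto kubectl scale, which talks to the StatefulSet's /scale endpoint rather than patching the object directly. A hand-runnable sketch under that assumption; the names (ss, a headless service called test) are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: test
    spec:
      clusterIP: None          # headless service backing the StatefulSet
      selector:
        app: ss
      ports:
      - port: 80
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss
    spec:
      serviceName: test
      replicas: 1
      selector:
        matchLabels:
          app: ss
      template:
        metadata:
          labels:
            app: ss
        spec:
          containers:
          - name: web
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    # Update via the scale subresource, then confirm spec.replicas changed.
    kubectl scale statefulset ss --replicas=2
    kubectl get statefulset ss -o jsonpath='{.spec.replicas}'    # expected: 2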
• [SLOW TEST:20.397 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":10,"skipped":234,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:11:23.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 8 21:11:23.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1" in namespace "projected-5048" to be "success or failure" Jan 8 21:11:23.398: INFO: Pod "downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.662971ms Jan 8 21:11:25.406: INFO: Pod "downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027879546s Jan 8 21:11:27.421: INFO: Pod "downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042446634s Jan 8 21:11:29.436: INFO: Pod "downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057990885s Jan 8 21:11:31.459: INFO: Pod "downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080338737s STEP: Saw pod success Jan 8 21:11:31.459: INFO: Pod "downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1" satisfied condition "success or failure" Jan 8 21:11:31.463: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1 container client-container: STEP: delete the pod Jan 8 21:11:31.499: INFO: Waiting for pod downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1 to disappear Jan 8 21:11:31.507: INFO: Pod downwardapi-volume-5c55d13f-18b9-4a88-995d-e9fc444a4fa1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:11:31.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5048" for this suite. 
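Unlike the earlier default-limit spec, this one sets an explicit request, and the projected downward API volume reports it verbatim, in bytes. A minimal sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-req-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.31
        command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
        resources:
          requests:
            memory: 32Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: memory_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.memory
    EOF
    kubectl logs downward-req-demo    # expected: 33554432 (32Mi in bytes)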
• [SLOW TEST:8.378 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:11:31.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 8 21:11:31.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8684' Jan 8 21:11:33.645: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 8 21:11:33.645: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Jan 8 21:11:33.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-8684' Jan 8 21:11:33.815: INFO: stderr: "" Jan 8 21:11:33.816: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:11:33.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8684" for this suite. 
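The deprecation warning in the stderr above is worth heeding: generator-based kubectl run was later removed, and from v1.20 on kubectl run creates only Pods. For comparison, both the form the test drives and the replacement the warning points to, with illustrative job names:

    # Deprecated form exercised by the test (works on the v1.17 client above):
    kubectl run e2e-demo-job --restart=OnFailure --generator=job/v1 \
      --image=docker.io/library/httpd:2.4.38-alpine
    # Modern equivalent:
    kubectl create job e2e-demo-job2 --image=docker.io/library/httpd:2.4.38-alpine
    kubectl get jobs
    kubectl delete job e2e-demo-job e2e-demo-job2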
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":12,"skipped":319,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:11:33.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1208/configmap-test-28022b42-701e-4b00-8984-91f4cd32e3ee STEP: Creating a pod to test consume configMaps Jan 8 21:11:33.953: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6" in namespace "configmap-1208" to be "success or failure" Jan 8 21:11:33.997: INFO: Pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 44.331775ms Jan 8 21:11:36.005: INFO: Pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052335287s Jan 8 21:11:38.010: INFO: Pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056831942s Jan 8 21:11:40.017: INFO: Pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063457373s Jan 8 21:11:42.022: INFO: Pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06902795s Jan 8 21:11:44.030: INFO: Pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076375896s STEP: Saw pod success Jan 8 21:11:44.030: INFO: Pod "pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6" satisfied condition "success or failure" Jan 8 21:11:44.033: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6 container env-test: STEP: delete the pod Jan 8 21:11:44.073: INFO: Waiting for pod pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6 to disappear Jan 8 21:11:44.091: INFO: Pod pod-configmaps-7c4fa724-1e3a-44b6-af59-7a2c6a46d1c6 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:11:44.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1208" for this suite. 
• [SLOW TEST:10.226 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:11:44.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:12:00.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2430" for this suite. • [SLOW TEST:16.185 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":14,"skipped":357,"failed":0} [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:12:00.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8602 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8602 I0108 21:12:00.650786 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8602, replica count: 2 I0108 21:12:03.702271 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0108 21:12:06.702763 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0108 21:12:09.703082 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 8 21:12:09.703: INFO: Creating new exec pod Jan 8 21:12:16.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8602 execpodskxxv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 8 21:12:17.233: INFO: stderr: "I0108 21:12:16.999719 99 log.go:172] (0xc0009b8bb0) (0xc000a70500) Create stream\nI0108 21:12:17.000209 99 log.go:172] (0xc0009b8bb0) (0xc000a70500) Stream added, broadcasting: 1\nI0108 21:12:17.005129 99 log.go:172] (0xc0009b8bb0) Reply frame received for 1\nI0108 21:12:17.005273 99 log.go:172] (0xc0009b8bb0) (0xc0009a4000) Create stream\nI0108 21:12:17.005297 99 log.go:172] (0xc0009b8bb0) (0xc0009a4000) Stream added, broadcasting: 3\nI0108 21:12:17.007262 99 log.go:172] (0xc0009b8bb0) Reply frame received for 3\nI0108 21:12:17.007321 99 log.go:172] (0xc0009b8bb0) (0xc0009de000) Create stream\nI0108 21:12:17.007350 99 log.go:172] (0xc0009b8bb0) (0xc0009de000) Stream added, broadcasting: 5\nI0108 21:12:17.008814 99 log.go:172] (0xc0009b8bb0) Reply frame received for 5\nI0108 21:12:17.088588 99 log.go:172] (0xc0009b8bb0) Data frame received for 5\nI0108 21:12:17.088770 99 log.go:172] (0xc0009de000) (5) Data frame handling\nI0108 21:12:17.088815 99 log.go:172] (0xc0009de000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0108 21:12:17.115645 99 log.go:172] (0xc0009b8bb0) Data frame received for 5\nI0108 21:12:17.115702 99 log.go:172] (0xc0009de000) (5) Data frame handling\nI0108 21:12:17.115717 99 log.go:172] (0xc0009de000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0108 21:12:17.220215 99 log.go:172] (0xc0009b8bb0) Data frame 
received for 1\nI0108 21:12:17.220371 99 log.go:172] (0xc0009b8bb0) (0xc0009de000) Stream removed, broadcasting: 5\nI0108 21:12:17.220489 99 log.go:172] (0xc0009b8bb0) (0xc0009a4000) Stream removed, broadcasting: 3\nI0108 21:12:17.220978 99 log.go:172] (0xc000a70500) (1) Data frame handling\nI0108 21:12:17.221046 99 log.go:172] (0xc000a70500) (1) Data frame sent\nI0108 21:12:17.221223 99 log.go:172] (0xc0009b8bb0) (0xc000a70500) Stream removed, broadcasting: 1\nI0108 21:12:17.221255 99 log.go:172] (0xc0009b8bb0) Go away received\nI0108 21:12:17.222485 99 log.go:172] (0xc0009b8bb0) (0xc000a70500) Stream removed, broadcasting: 1\nI0108 21:12:17.222657 99 log.go:172] (0xc0009b8bb0) (0xc0009a4000) Stream removed, broadcasting: 3\nI0108 21:12:17.222666 99 log.go:172] (0xc0009b8bb0) (0xc0009de000) Stream removed, broadcasting: 5\n" Jan 8 21:12:17.233: INFO: stdout: "" Jan 8 21:12:17.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8602 execpodskxxv -- /bin/sh -x -c nc -zv -t -w 2 10.96.85.42 80' Jan 8 21:12:17.500: INFO: stderr: "I0108 21:12:17.360193 119 log.go:172] (0xc0000f4370) (0xc0006d3c20) Create stream\nI0108 21:12:17.360357 119 log.go:172] (0xc0000f4370) (0xc0006d3c20) Stream added, broadcasting: 1\nI0108 21:12:17.365527 119 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0108 21:12:17.365559 119 log.go:172] (0xc0000f4370) (0xc0006646e0) Create stream\nI0108 21:12:17.365565 119 log.go:172] (0xc0000f4370) (0xc0006646e0) Stream added, broadcasting: 3\nI0108 21:12:17.366739 119 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0108 21:12:17.366758 119 log.go:172] (0xc0000f4370) (0xc0006d3cc0) Create stream\nI0108 21:12:17.366764 119 log.go:172] (0xc0000f4370) (0xc0006d3cc0) Stream added, broadcasting: 5\nI0108 21:12:17.367923 119 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0108 21:12:17.424685 119 log.go:172] (0xc0000f4370) Data frame received for 5\nI0108 21:12:17.424742 119 log.go:172] (0xc0006d3cc0) (5) Data frame handling\nI0108 21:12:17.424768 119 log.go:172] (0xc0006d3cc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.85.42 80\nI0108 21:12:17.425330 119 log.go:172] (0xc0000f4370) Data frame received for 5\nI0108 21:12:17.425349 119 log.go:172] (0xc0006d3cc0) (5) Data frame handling\nI0108 21:12:17.425362 119 log.go:172] (0xc0006d3cc0) (5) Data frame sent\nConnection to 10.96.85.42 80 port [tcp/http] succeeded!\nI0108 21:12:17.491600 119 log.go:172] (0xc0000f4370) Data frame received for 1\nI0108 21:12:17.491878 119 log.go:172] (0xc0000f4370) (0xc0006d3cc0) Stream removed, broadcasting: 5\nI0108 21:12:17.491935 119 log.go:172] (0xc0006d3c20) (1) Data frame handling\nI0108 21:12:17.491953 119 log.go:172] (0xc0006d3c20) (1) Data frame sent\nI0108 21:12:17.491978 119 log.go:172] (0xc0000f4370) (0xc0006646e0) Stream removed, broadcasting: 3\nI0108 21:12:17.492002 119 log.go:172] (0xc0000f4370) (0xc0006d3c20) Stream removed, broadcasting: 1\nI0108 21:12:17.492023 119 log.go:172] (0xc0000f4370) Go away received\nI0108 21:12:17.492933 119 log.go:172] (0xc0000f4370) (0xc0006d3c20) Stream removed, broadcasting: 1\nI0108 21:12:17.492957 119 log.go:172] (0xc0000f4370) (0xc0006646e0) Stream removed, broadcasting: 3\nI0108 21:12:17.492967 119 log.go:172] (0xc0000f4370) (0xc0006d3cc0) Stream removed, broadcasting: 5\n" Jan 8 21:12:17.501: INFO: stdout: "" Jan 8 21:12:17.501: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:12:17.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8602" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.261 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":15,"skipped":357,"failed":0} SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:12:17.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 8 21:12:17.725: INFO: Waiting up to 5m0s for pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4" in namespace "downward-api-4680" to be "success or failure" Jan 8 21:12:17.729: INFO: Pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.91728ms Jan 8 21:12:19.735: INFO: Pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009424043s Jan 8 21:12:21.741: INFO: Pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015871338s Jan 8 21:12:23.748: INFO: Pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022855811s Jan 8 21:12:26.100: INFO: Pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.374988363s Jan 8 21:12:28.123: INFO: Pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.397524667s STEP: Saw pod success Jan 8 21:12:28.123: INFO: Pod "downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4" satisfied condition "success or failure" Jan 8 21:12:28.129: INFO: Trying to get logs from node jerma-node pod downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4 container dapi-container: STEP: delete the pod Jan 8 21:12:28.631: INFO: Waiting for pod downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4 to disappear Jan 8 21:12:28.636: INFO: Pod downward-api-15e4c961-aeb2-4e08-a771-3a126a665ee4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:12:28.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4680" for this suite. • [SLOW TEST:11.106 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:12:28.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:12:28.764: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 8 21:12:31.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7624 create -f -' Jan 8 21:12:34.771: INFO: stderr: "" Jan 8 21:12:34.771: INFO: stdout: "e2e-test-crd-publish-openapi-1011-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 8 21:12:34.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7624 delete e2e-test-crd-publish-openapi-1011-crds test-cr' Jan 8 21:12:34.906: INFO: stderr: "" Jan 8 21:12:34.906: INFO: stdout: "e2e-test-crd-publish-openapi-1011-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 8 21:12:34.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7624 apply -f -' Jan 8 21:12:35.304: INFO: stderr: "" Jan 8 21:12:35.305: INFO: stdout: "e2e-test-crd-publish-openapi-1011-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 8 21:12:35.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7624 delete 
e2e-test-crd-publish-openapi-1011-crds test-cr' Jan 8 21:12:35.496: INFO: stderr: "" Jan 8 21:12:35.496: INFO: stdout: "e2e-test-crd-publish-openapi-1011-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 8 21:12:35.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1011-crds' Jan 8 21:12:35.840: INFO: stderr: "" Jan 8 21:12:35.840: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1011-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:12:39.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7624" for this suite. • [SLOW TEST:10.864 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":17,"skipped":383,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:12:39.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jan 8 21:12:39.654: INFO: Waiting up to 5m0s for pod "client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69" in namespace "containers-3076" to be "success or failure" Jan 8 21:12:39.668: INFO: Pod "client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69": Phase="Pending", Reason="", readiness=false. Elapsed: 13.924146ms Jan 8 21:12:41.678: INFO: Pod "client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023762876s Jan 8 21:12:43.690: INFO: Pod "client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036065668s Jan 8 21:12:45.701: INFO: Pod "client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047239689s Jan 8 21:12:47.709: INFO: Pod "client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.055073965s STEP: Saw pod success Jan 8 21:12:47.709: INFO: Pod "client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69" satisfied condition "success or failure" Jan 8 21:12:47.713: INFO: Trying to get logs from node jerma-node pod client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69 container test-container: STEP: delete the pod Jan 8 21:12:47.782: INFO: Waiting for pod client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69 to disappear Jan 8 21:12:47.870: INFO: Pod client-containers-c28fd1b4-8b7e-4322-9409-a04261c5bc69 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:12:47.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3076" for this suite. • [SLOW TEST:8.363 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:12:47.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-229093f8-a0e6-44d7-9827-4db51cbe3b0f STEP: Creating a pod to test consume configMaps Jan 8 21:12:48.088: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826" in namespace "projected-6896" to be "success or failure" Jan 8 21:12:48.106: INFO: Pod "pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826": Phase="Pending", Reason="", readiness=false. Elapsed: 18.487085ms Jan 8 21:12:50.115: INFO: Pod "pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026950526s Jan 8 21:12:52.122: INFO: Pod "pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033536715s Jan 8 21:12:54.131: INFO: Pod "pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042591205s Jan 8 21:12:56.138: INFO: Pod "pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.050000656s STEP: Saw pod success Jan 8 21:12:56.138: INFO: Pod "pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826" satisfied condition "success or failure" Jan 8 21:12:56.143: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826 container projected-configmap-volume-test: STEP: delete the pod Jan 8 21:12:56.225: INFO: Waiting for pod pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826 to disappear Jan 8 21:12:56.240: INFO: Pod pod-projected-configmaps-6f193a66-71e4-4b91-aba7-521fabf69826 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:12:56.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6896" for this suite. • [SLOW TEST:8.370 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":420,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:12:56.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 8 21:12:56.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84" in namespace "downward-api-4008" to be "success or failure" Jan 8 21:12:56.388: INFO: Pod "downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280918ms Jan 8 21:12:58.396: INFO: Pod "downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016075005s Jan 8 21:13:00.405: INFO: Pod "downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024858459s Jan 8 21:13:02.410: INFO: Pod "downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029924082s STEP: Saw pod success Jan 8 21:13:02.410: INFO: Pod "downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84" satisfied condition "success or failure" Jan 8 21:13:02.414: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84 container client-container: STEP: delete the pod Jan 8 21:13:02.447: INFO: Waiting for pod downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84 to disappear Jan 8 21:13:02.479: INFO: Pod downwardapi-volume-96a115e8-c0f3-4f61-8cd6-d9ff5bb2ca84 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:13:02.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4008" for this suite. • [SLOW TEST:6.356 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":465,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:13:02.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1650.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1650.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 8 21:13:14.878: INFO: File jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-92855db3-64ea-48ad-9ca2-bc60f659cfa0 contains '' instead of 'foo.example.com.' 
Jan 8 21:13:14.878: INFO: Lookups using dns-1650/dns-test-92855db3-64ea-48ad-9ca2-bc60f659cfa0 failed for: [jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local] Jan 8 21:13:19.896: INFO: DNS probes using dns-test-92855db3-64ea-48ad-9ca2-bc60f659cfa0 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1650.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1650.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 8 21:13:32.221: INFO: File wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 8 21:13:32.228: INFO: File jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 8 21:13:32.228: INFO: Lookups using dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 failed for: [wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local] Jan 8 21:13:37.237: INFO: File wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 8 21:13:37.245: INFO: File jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 8 21:13:37.245: INFO: Lookups using dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 failed for: [wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local] Jan 8 21:13:42.237: INFO: File wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 8 21:13:42.245: INFO: File jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 8 21:13:42.245: INFO: Lookups using dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 failed for: [wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local] Jan 8 21:13:47.246: INFO: File jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local from pod dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 contains '' instead of 'bar.example.com.' 
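(The retries in these probe records are the test tolerating DNS cache propagation: an ExternalName service is just a CNAME answered by cluster DNS, so after the externalName changes, probes keep returning the old target until caches expire. A sketch of the moving parts, with illustrative names; the dig command must run inside a cluster pod, as in the probe loops above:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: dns-demo
    spec:
      type: ExternalName
      externalName: foo.example.com    # answered as a CNAME by cluster DNS
    EOF
    # From a pod with dig available:
    dig +short dns-demo.default.svc.cluster.local CNAME   # foo.example.com.
    # After patching externalName, repeated probes converge on the new target
    # once DNS caches expire, which is the delay the retries above absorb.
    kubectl patch service dns-demo -p '{"spec":{"externalName":"bar.example.com"}}'

The probe records continue below.)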
Jan 8 21:13:47.246: INFO: Lookups using dns-1650/dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 failed for: [jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local] Jan 8 21:13:52.251: INFO: DNS probes using dns-test-2f942e3e-ba7e-4530-985c-d42ca447b2b9 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1650.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1650.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1650.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1650.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 8 21:14:04.583: INFO: DNS probes using dns-test-5337f4ee-d07c-4ae3-8cf8-0f59810aab55 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:14:04.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1650" for this suite. • [SLOW TEST:62.092 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":21,"skipped":476,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:14:04.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-gfcz STEP: Creating a pod to test atomic-volume-subpath Jan 8 21:14:04.867: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gfcz" in namespace "subpath-3386" to be "success or failure" Jan 8 21:14:04.878: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Pending", Reason="", readiness=false. Elapsed: 11.293798ms Jan 8 21:14:06.884: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01657694s Jan 8 21:14:08.905: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038486477s Jan 8 21:14:10.921: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.054224405s Jan 8 21:14:12.929: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 8.062006618s Jan 8 21:14:14.935: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 10.068375367s Jan 8 21:14:16.942: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 12.075071845s Jan 8 21:14:18.951: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 14.084272567s Jan 8 21:14:20.957: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 16.090362076s Jan 8 21:14:22.964: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 18.096704679s Jan 8 21:14:24.973: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 20.10608422s Jan 8 21:14:27.066: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 22.199228241s Jan 8 21:14:29.076: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 24.208591589s Jan 8 21:14:31.081: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Running", Reason="", readiness=true. Elapsed: 26.214507222s Jan 8 21:14:33.088: INFO: Pod "pod-subpath-test-configmap-gfcz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.220737699s STEP: Saw pod success Jan 8 21:14:33.088: INFO: Pod "pod-subpath-test-configmap-gfcz" satisfied condition "success or failure" Jan 8 21:14:33.091: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-gfcz container test-container-subpath-configmap-gfcz: STEP: delete the pod Jan 8 21:14:33.176: INFO: Waiting for pod pod-subpath-test-configmap-gfcz to disappear Jan 8 21:14:33.187: INFO: Pod pod-subpath-test-configmap-gfcz no longer exists STEP: Deleting pod pod-subpath-test-configmap-gfcz Jan 8 21:14:33.187: INFO: Deleting pod "pod-subpath-test-configmap-gfcz" in namespace "subpath-3386" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:14:33.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3386" for this suite. 
• [SLOW TEST:28.497 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":22,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:14:33.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 8 21:14:51.516: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:51.516: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:51.614922 9 log.go:172] (0xc0027de000) (0xc001c5caa0) Create stream I0108 21:14:51.615103 9 log.go:172] (0xc0027de000) (0xc001c5caa0) Stream added, broadcasting: 1 I0108 21:14:51.641726 9 log.go:172] (0xc0027de000) Reply frame received for 1 I0108 21:14:51.642058 9 log.go:172] (0xc0027de000) (0xc0018a25a0) Create stream I0108 21:14:51.642088 9 log.go:172] (0xc0027de000) (0xc0018a25a0) Stream added, broadcasting: 3 I0108 21:14:51.649108 9 log.go:172] (0xc0027de000) Reply frame received for 3 I0108 21:14:51.649163 9 log.go:172] (0xc0027de000) (0xc0018a2640) Create stream I0108 21:14:51.649176 9 log.go:172] (0xc0027de000) (0xc0018a2640) Stream added, broadcasting: 5 I0108 21:14:51.651851 9 log.go:172] (0xc0027de000) Reply frame received for 5 I0108 21:14:51.734003 9 log.go:172] (0xc0027de000) Data frame received for 3 I0108 21:14:51.734068 9 log.go:172] (0xc0018a25a0) (3) Data frame handling I0108 21:14:51.734108 9 log.go:172] (0xc0018a25a0) (3) Data frame sent I0108 21:14:51.827193 9 log.go:172] (0xc0027de000) Data frame received for 1 I0108 21:14:51.827310 9 log.go:172] (0xc001c5caa0) (1) Data frame handling I0108 21:14:51.827363 9 log.go:172] (0xc001c5caa0) (1) Data frame sent I0108 21:14:51.827644 9 log.go:172] (0xc0027de000) (0xc0018a25a0) Stream removed, broadcasting: 3 I0108 21:14:51.827923 9 log.go:172] (0xc0027de000) (0xc0018a2640) Stream removed, broadcasting: 5 I0108 21:14:51.827969 9 log.go:172] (0xc0027de000) (0xc001c5caa0) Stream removed, broadcasting: 1 I0108 21:14:51.828007 9 log.go:172] (0xc0027de000) Go away received 
I0108 21:14:51.829301 9 log.go:172] (0xc0027de000) (0xc001c5caa0) Stream removed, broadcasting: 1 I0108 21:14:51.829321 9 log.go:172] (0xc0027de000) (0xc0018a25a0) Stream removed, broadcasting: 3 I0108 21:14:51.829329 9 log.go:172] (0xc0027de000) (0xc0018a2640) Stream removed, broadcasting: 5 Jan 8 21:14:51.829: INFO: Exec stderr: "" Jan 8 21:14:51.829: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:51.829: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:51.879053 9 log.go:172] (0xc002a96160) (0xc0018a26e0) Create stream I0108 21:14:51.879160 9 log.go:172] (0xc002a96160) (0xc0018a26e0) Stream added, broadcasting: 1 I0108 21:14:51.889120 9 log.go:172] (0xc002a96160) Reply frame received for 1 I0108 21:14:51.889302 9 log.go:172] (0xc002a96160) (0xc0018a2780) Create stream I0108 21:14:51.889332 9 log.go:172] (0xc002a96160) (0xc0018a2780) Stream added, broadcasting: 3 I0108 21:14:51.892124 9 log.go:172] (0xc002a96160) Reply frame received for 3 I0108 21:14:51.892168 9 log.go:172] (0xc002a96160) (0xc001848000) Create stream I0108 21:14:51.892181 9 log.go:172] (0xc002a96160) (0xc001848000) Stream added, broadcasting: 5 I0108 21:14:51.894578 9 log.go:172] (0xc002a96160) Reply frame received for 5 I0108 21:14:51.971154 9 log.go:172] (0xc002a96160) Data frame received for 3 I0108 21:14:51.971246 9 log.go:172] (0xc0018a2780) (3) Data frame handling I0108 21:14:51.971274 9 log.go:172] (0xc0018a2780) (3) Data frame sent I0108 21:14:52.064425 9 log.go:172] (0xc002a96160) (0xc0018a2780) Stream removed, broadcasting: 3 I0108 21:14:52.065150 9 log.go:172] (0xc002a96160) Data frame received for 1 I0108 21:14:52.065330 9 log.go:172] (0xc002a96160) (0xc001848000) Stream removed, broadcasting: 5 I0108 21:14:52.065456 9 log.go:172] (0xc0018a26e0) (1) Data frame handling I0108 21:14:52.065500 9 log.go:172] (0xc0018a26e0) (1) Data frame sent I0108 21:14:52.065523 9 log.go:172] (0xc002a96160) (0xc0018a26e0) Stream removed, broadcasting: 1 I0108 21:14:52.065804 9 log.go:172] (0xc002a96160) Go away received I0108 21:14:52.066457 9 log.go:172] (0xc002a96160) (0xc0018a26e0) Stream removed, broadcasting: 1 I0108 21:14:52.066504 9 log.go:172] (0xc002a96160) (0xc0018a2780) Stream removed, broadcasting: 3 I0108 21:14:52.066516 9 log.go:172] (0xc002a96160) (0xc001848000) Stream removed, broadcasting: 5 Jan 8 21:14:52.066: INFO: Exec stderr: "" Jan 8 21:14:52.066: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:52.067: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:52.128671 9 log.go:172] (0xc000ac6000) (0xc001848140) Create stream I0108 21:14:52.128801 9 log.go:172] (0xc000ac6000) (0xc001848140) Stream added, broadcasting: 1 I0108 21:14:52.137152 9 log.go:172] (0xc000ac6000) Reply frame received for 1 I0108 21:14:52.137258 9 log.go:172] (0xc000ac6000) (0xc0017ac0a0) Create stream I0108 21:14:52.137276 9 log.go:172] (0xc000ac6000) (0xc0017ac0a0) Stream added, broadcasting: 3 I0108 21:14:52.139646 9 log.go:172] (0xc000ac6000) Reply frame received for 3 I0108 21:14:52.139667 9 log.go:172] (0xc000ac6000) (0xc0017ac280) Create stream I0108 21:14:52.139672 9 log.go:172] (0xc000ac6000) (0xc0017ac280) Stream added, broadcasting: 5 I0108 21:14:52.141940 9 log.go:172] (0xc000ac6000) Reply frame 
received for 5 I0108 21:14:52.233502 9 log.go:172] (0xc000ac6000) Data frame received for 3 I0108 21:14:52.233562 9 log.go:172] (0xc0017ac0a0) (3) Data frame handling I0108 21:14:52.233590 9 log.go:172] (0xc0017ac0a0) (3) Data frame sent I0108 21:14:52.296725 9 log.go:172] (0xc000ac6000) (0xc0017ac280) Stream removed, broadcasting: 5 I0108 21:14:52.296785 9 log.go:172] (0xc000ac6000) Data frame received for 1 I0108 21:14:52.296809 9 log.go:172] (0xc001848140) (1) Data frame handling I0108 21:14:52.296833 9 log.go:172] (0xc000ac6000) (0xc0017ac0a0) Stream removed, broadcasting: 3 I0108 21:14:52.296900 9 log.go:172] (0xc001848140) (1) Data frame sent I0108 21:14:52.296919 9 log.go:172] (0xc000ac6000) (0xc001848140) Stream removed, broadcasting: 1 I0108 21:14:52.296946 9 log.go:172] (0xc000ac6000) Go away received I0108 21:14:52.297430 9 log.go:172] (0xc000ac6000) (0xc001848140) Stream removed, broadcasting: 1 I0108 21:14:52.297450 9 log.go:172] (0xc000ac6000) (0xc0017ac0a0) Stream removed, broadcasting: 3 I0108 21:14:52.297456 9 log.go:172] (0xc000ac6000) (0xc0017ac280) Stream removed, broadcasting: 5 Jan 8 21:14:52.297: INFO: Exec stderr: "" Jan 8 21:14:52.297: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:52.297: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:52.335002 9 log.go:172] (0xc000ac6420) (0xc001848320) Create stream I0108 21:14:52.335063 9 log.go:172] (0xc000ac6420) (0xc001848320) Stream added, broadcasting: 1 I0108 21:14:52.341327 9 log.go:172] (0xc000ac6420) Reply frame received for 1 I0108 21:14:52.341401 9 log.go:172] (0xc000ac6420) (0xc0018483c0) Create stream I0108 21:14:52.341410 9 log.go:172] (0xc000ac6420) (0xc0018483c0) Stream added, broadcasting: 3 I0108 21:14:52.342233 9 log.go:172] (0xc000ac6420) Reply frame received for 3 I0108 21:14:52.342253 9 log.go:172] (0xc000ac6420) (0xc001848460) Create stream I0108 21:14:52.342259 9 log.go:172] (0xc000ac6420) (0xc001848460) Stream added, broadcasting: 5 I0108 21:14:52.343501 9 log.go:172] (0xc000ac6420) Reply frame received for 5 I0108 21:14:52.400626 9 log.go:172] (0xc000ac6420) Data frame received for 3 I0108 21:14:52.400701 9 log.go:172] (0xc0018483c0) (3) Data frame handling I0108 21:14:52.400722 9 log.go:172] (0xc0018483c0) (3) Data frame sent I0108 21:14:52.498329 9 log.go:172] (0xc000ac6420) Data frame received for 1 I0108 21:14:52.498647 9 log.go:172] (0xc000ac6420) (0xc0018483c0) Stream removed, broadcasting: 3 I0108 21:14:52.498773 9 log.go:172] (0xc001848320) (1) Data frame handling I0108 21:14:52.498800 9 log.go:172] (0xc001848320) (1) Data frame sent I0108 21:14:52.498982 9 log.go:172] (0xc000ac6420) (0xc001848320) Stream removed, broadcasting: 1 I0108 21:14:52.499131 9 log.go:172] (0xc000ac6420) (0xc001848460) Stream removed, broadcasting: 5 I0108 21:14:52.499219 9 log.go:172] (0xc000ac6420) Go away received I0108 21:14:52.499466 9 log.go:172] (0xc000ac6420) (0xc001848320) Stream removed, broadcasting: 1 I0108 21:14:52.499485 9 log.go:172] (0xc000ac6420) (0xc0018483c0) Stream removed, broadcasting: 3 I0108 21:14:52.499502 9 log.go:172] (0xc000ac6420) (0xc001848460) Stream removed, broadcasting: 5 Jan 8 21:14:52.499: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 8 21:14:52.499: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:52.499: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:52.611760 9 log.go:172] (0xc002a96790) (0xc0018a2b40) Create stream I0108 21:14:52.611924 9 log.go:172] (0xc002a96790) (0xc0018a2b40) Stream added, broadcasting: 1 I0108 21:14:52.621451 9 log.go:172] (0xc002a96790) Reply frame received for 1 I0108 21:14:52.621548 9 log.go:172] (0xc002a96790) (0xc0017ac320) Create stream I0108 21:14:52.621565 9 log.go:172] (0xc002a96790) (0xc0017ac320) Stream added, broadcasting: 3 I0108 21:14:52.623241 9 log.go:172] (0xc002a96790) Reply frame received for 3 I0108 21:14:52.623271 9 log.go:172] (0xc002a96790) (0xc0017ac460) Create stream I0108 21:14:52.623279 9 log.go:172] (0xc002a96790) (0xc0017ac460) Stream added, broadcasting: 5 I0108 21:14:52.624525 9 log.go:172] (0xc002a96790) Reply frame received for 5 I0108 21:14:52.705570 9 log.go:172] (0xc002a96790) Data frame received for 3 I0108 21:14:52.705620 9 log.go:172] (0xc0017ac320) (3) Data frame handling I0108 21:14:52.705636 9 log.go:172] (0xc0017ac320) (3) Data frame sent I0108 21:14:52.768021 9 log.go:172] (0xc002a96790) Data frame received for 1 I0108 21:14:52.768059 9 log.go:172] (0xc0018a2b40) (1) Data frame handling I0108 21:14:52.768076 9 log.go:172] (0xc0018a2b40) (1) Data frame sent I0108 21:14:52.768307 9 log.go:172] (0xc002a96790) (0xc0018a2b40) Stream removed, broadcasting: 1 I0108 21:14:52.768635 9 log.go:172] (0xc002a96790) (0xc0017ac320) Stream removed, broadcasting: 3 I0108 21:14:52.768949 9 log.go:172] (0xc002a96790) (0xc0017ac460) Stream removed, broadcasting: 5 I0108 21:14:52.769016 9 log.go:172] (0xc002a96790) (0xc0018a2b40) Stream removed, broadcasting: 1 I0108 21:14:52.769025 9 log.go:172] (0xc002a96790) (0xc0017ac320) Stream removed, broadcasting: 3 I0108 21:14:52.769034 9 log.go:172] (0xc002a96790) (0xc0017ac460) Stream removed, broadcasting: 5 I0108 21:14:52.769105 9 log.go:172] (0xc002a96790) Go away received Jan 8 21:14:52.769: INFO: Exec stderr: "" Jan 8 21:14:52.769: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:52.769: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:52.808250 9 log.go:172] (0xc0016be000) (0xc001ccc8c0) Create stream I0108 21:14:52.808346 9 log.go:172] (0xc0016be000) (0xc001ccc8c0) Stream added, broadcasting: 1 I0108 21:14:52.814603 9 log.go:172] (0xc0016be000) Reply frame received for 1 I0108 21:14:52.814642 9 log.go:172] (0xc0016be000) (0xc001cccaa0) Create stream I0108 21:14:52.814653 9 log.go:172] (0xc0016be000) (0xc001cccaa0) Stream added, broadcasting: 3 I0108 21:14:52.815502 9 log.go:172] (0xc0016be000) Reply frame received for 3 I0108 21:14:52.815522 9 log.go:172] (0xc0016be000) (0xc0013886e0) Create stream I0108 21:14:52.815532 9 log.go:172] (0xc0016be000) (0xc0013886e0) Stream added, broadcasting: 5 I0108 21:14:52.816427 9 log.go:172] (0xc0016be000) Reply frame received for 5 I0108 21:14:52.878689 9 log.go:172] (0xc0016be000) Data frame received for 3 I0108 21:14:52.878790 9 log.go:172] (0xc001cccaa0) (3) Data frame handling I0108 21:14:52.878814 9 log.go:172] (0xc001cccaa0) (3) Data frame sent I0108 21:14:52.957694 9 log.go:172] (0xc0016be000) Data frame received for 1 I0108 21:14:52.957784 9 log.go:172] (0xc0016be000) (0xc001cccaa0) Stream removed, 
broadcasting: 3 I0108 21:14:52.957837 9 log.go:172] (0xc001ccc8c0) (1) Data frame handling I0108 21:14:52.957868 9 log.go:172] (0xc001ccc8c0) (1) Data frame sent I0108 21:14:52.957905 9 log.go:172] (0xc0016be000) (0xc0013886e0) Stream removed, broadcasting: 5 I0108 21:14:52.957965 9 log.go:172] (0xc0016be000) (0xc001ccc8c0) Stream removed, broadcasting: 1 I0108 21:14:52.957991 9 log.go:172] (0xc0016be000) Go away received I0108 21:14:52.958309 9 log.go:172] (0xc0016be000) (0xc001ccc8c0) Stream removed, broadcasting: 1 I0108 21:14:52.958320 9 log.go:172] (0xc0016be000) (0xc001cccaa0) Stream removed, broadcasting: 3 I0108 21:14:52.958328 9 log.go:172] (0xc0016be000) (0xc0013886e0) Stream removed, broadcasting: 5 Jan 8 21:14:52.958: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 8 21:14:52.958: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:52.958: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:52.996020 9 log.go:172] (0xc002ad5ce0) (0xc0017ac640) Create stream I0108 21:14:52.996079 9 log.go:172] (0xc002ad5ce0) (0xc0017ac640) Stream added, broadcasting: 1 I0108 21:14:53.000690 9 log.go:172] (0xc002ad5ce0) Reply frame received for 1 I0108 21:14:53.000720 9 log.go:172] (0xc002ad5ce0) (0xc0018a2be0) Create stream I0108 21:14:53.000731 9 log.go:172] (0xc002ad5ce0) (0xc0018a2be0) Stream added, broadcasting: 3 I0108 21:14:53.001873 9 log.go:172] (0xc002ad5ce0) Reply frame received for 3 I0108 21:14:53.001898 9 log.go:172] (0xc002ad5ce0) (0xc0017ac6e0) Create stream I0108 21:14:53.001907 9 log.go:172] (0xc002ad5ce0) (0xc0017ac6e0) Stream added, broadcasting: 5 I0108 21:14:53.002903 9 log.go:172] (0xc002ad5ce0) Reply frame received for 5 I0108 21:14:53.079196 9 log.go:172] (0xc002ad5ce0) Data frame received for 3 I0108 21:14:53.079298 9 log.go:172] (0xc0018a2be0) (3) Data frame handling I0108 21:14:53.079348 9 log.go:172] (0xc0018a2be0) (3) Data frame sent I0108 21:14:53.152948 9 log.go:172] (0xc002ad5ce0) Data frame received for 1 I0108 21:14:53.153085 9 log.go:172] (0xc002ad5ce0) (0xc0018a2be0) Stream removed, broadcasting: 3 I0108 21:14:53.153216 9 log.go:172] (0xc0017ac640) (1) Data frame handling I0108 21:14:53.153264 9 log.go:172] (0xc0017ac640) (1) Data frame sent I0108 21:14:53.153305 9 log.go:172] (0xc002ad5ce0) (0xc0017ac640) Stream removed, broadcasting: 1 I0108 21:14:53.153379 9 log.go:172] (0xc002ad5ce0) (0xc0017ac6e0) Stream removed, broadcasting: 5 I0108 21:14:53.153485 9 log.go:172] (0xc002ad5ce0) Go away received I0108 21:14:53.153838 9 log.go:172] (0xc002ad5ce0) (0xc0017ac640) Stream removed, broadcasting: 1 I0108 21:14:53.153927 9 log.go:172] (0xc002ad5ce0) (0xc0018a2be0) Stream removed, broadcasting: 3 I0108 21:14:53.153950 9 log.go:172] (0xc002ad5ce0) (0xc0017ac6e0) Stream removed, broadcasting: 5 Jan 8 21:14:53.154: INFO: Exec stderr: "" Jan 8 21:14:53.154: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:53.154: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:53.197416 9 log.go:172] (0xc001764370) (0xc0017aca00) Create stream I0108 21:14:53.197595 9 log.go:172] (0xc001764370) (0xc0017aca00) Stream added, broadcasting: 1 I0108 
21:14:53.209611 9 log.go:172] (0xc001764370) Reply frame received for 1 I0108 21:14:53.209820 9 log.go:172] (0xc001764370) (0xc001848500) Create stream I0108 21:14:53.209856 9 log.go:172] (0xc001764370) (0xc001848500) Stream added, broadcasting: 3 I0108 21:14:53.211342 9 log.go:172] (0xc001764370) Reply frame received for 3 I0108 21:14:53.211367 9 log.go:172] (0xc001764370) (0xc001848780) Create stream I0108 21:14:53.211374 9 log.go:172] (0xc001764370) (0xc001848780) Stream added, broadcasting: 5 I0108 21:14:53.212500 9 log.go:172] (0xc001764370) Reply frame received for 5 I0108 21:14:53.278246 9 log.go:172] (0xc001764370) Data frame received for 3 I0108 21:14:53.278294 9 log.go:172] (0xc001848500) (3) Data frame handling I0108 21:14:53.278315 9 log.go:172] (0xc001848500) (3) Data frame sent I0108 21:14:53.340779 9 log.go:172] (0xc001764370) (0xc001848500) Stream removed, broadcasting: 3 I0108 21:14:53.340874 9 log.go:172] (0xc001764370) Data frame received for 1 I0108 21:14:53.341102 9 log.go:172] (0xc0017aca00) (1) Data frame handling I0108 21:14:53.341147 9 log.go:172] (0xc0017aca00) (1) Data frame sent I0108 21:14:53.341431 9 log.go:172] (0xc001764370) (0xc001848780) Stream removed, broadcasting: 5 I0108 21:14:53.341471 9 log.go:172] (0xc001764370) (0xc0017aca00) Stream removed, broadcasting: 1 I0108 21:14:53.341483 9 log.go:172] (0xc001764370) Go away received I0108 21:14:53.341886 9 log.go:172] (0xc001764370) (0xc0017aca00) Stream removed, broadcasting: 1 I0108 21:14:53.341900 9 log.go:172] (0xc001764370) (0xc001848500) Stream removed, broadcasting: 3 I0108 21:14:53.341906 9 log.go:172] (0xc001764370) (0xc001848780) Stream removed, broadcasting: 5 Jan 8 21:14:53.341: INFO: Exec stderr: "" Jan 8 21:14:53.341: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:53.342: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:53.383364 9 log.go:172] (0xc002a96a50) (0xc0018a2c80) Create stream I0108 21:14:53.383504 9 log.go:172] (0xc002a96a50) (0xc0018a2c80) Stream added, broadcasting: 1 I0108 21:14:53.387629 9 log.go:172] (0xc002a96a50) Reply frame received for 1 I0108 21:14:53.387776 9 log.go:172] (0xc002a96a50) (0xc001388d20) Create stream I0108 21:14:53.387801 9 log.go:172] (0xc002a96a50) (0xc001388d20) Stream added, broadcasting: 3 I0108 21:14:53.390385 9 log.go:172] (0xc002a96a50) Reply frame received for 3 I0108 21:14:53.390433 9 log.go:172] (0xc002a96a50) (0xc0018a2d20) Create stream I0108 21:14:53.390450 9 log.go:172] (0xc002a96a50) (0xc0018a2d20) Stream added, broadcasting: 5 I0108 21:14:53.391715 9 log.go:172] (0xc002a96a50) Reply frame received for 5 I0108 21:14:53.463316 9 log.go:172] (0xc002a96a50) Data frame received for 3 I0108 21:14:53.463382 9 log.go:172] (0xc001388d20) (3) Data frame handling I0108 21:14:53.463420 9 log.go:172] (0xc001388d20) (3) Data frame sent I0108 21:14:53.538636 9 log.go:172] (0xc002a96a50) Data frame received for 1 I0108 21:14:53.538947 9 log.go:172] (0xc002a96a50) (0xc0018a2d20) Stream removed, broadcasting: 5 I0108 21:14:53.539110 9 log.go:172] (0xc0018a2c80) (1) Data frame handling I0108 21:14:53.539187 9 log.go:172] (0xc002a96a50) (0xc001388d20) Stream removed, broadcasting: 3 I0108 21:14:53.539406 9 log.go:172] (0xc0018a2c80) (1) Data frame sent I0108 21:14:53.539454 9 log.go:172] (0xc002a96a50) (0xc0018a2c80) Stream removed, broadcasting: 1 I0108 21:14:53.539492 9 log.go:172] 
(0xc002a96a50) Go away received I0108 21:14:53.540025 9 log.go:172] (0xc002a96a50) (0xc0018a2c80) Stream removed, broadcasting: 1 I0108 21:14:53.540043 9 log.go:172] (0xc002a96a50) (0xc001388d20) Stream removed, broadcasting: 3 I0108 21:14:53.540056 9 log.go:172] (0xc002a96a50) (0xc0018a2d20) Stream removed, broadcasting: 5 Jan 8 21:14:53.540: INFO: Exec stderr: "" Jan 8 21:14:53.540: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4310 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:14:53.540: INFO: >>> kubeConfig: /root/.kube/config I0108 21:14:53.592728 9 log.go:172] (0xc000ac6b00) (0xc001848aa0) Create stream I0108 21:14:53.592808 9 log.go:172] (0xc000ac6b00) (0xc001848aa0) Stream added, broadcasting: 1 I0108 21:14:53.600396 9 log.go:172] (0xc000ac6b00) Reply frame received for 1 I0108 21:14:53.600470 9 log.go:172] (0xc000ac6b00) (0xc0017acbe0) Create stream I0108 21:14:53.600494 9 log.go:172] (0xc000ac6b00) (0xc0017acbe0) Stream added, broadcasting: 3 I0108 21:14:53.602579 9 log.go:172] (0xc000ac6b00) Reply frame received for 3 I0108 21:14:53.602694 9 log.go:172] (0xc000ac6b00) (0xc001848b40) Create stream I0108 21:14:53.602714 9 log.go:172] (0xc000ac6b00) (0xc001848b40) Stream added, broadcasting: 5 I0108 21:14:53.604086 9 log.go:172] (0xc000ac6b00) Reply frame received for 5 I0108 21:14:53.670597 9 log.go:172] (0xc000ac6b00) Data frame received for 3 I0108 21:14:53.670653 9 log.go:172] (0xc0017acbe0) (3) Data frame handling I0108 21:14:53.670670 9 log.go:172] (0xc0017acbe0) (3) Data frame sent I0108 21:14:53.747274 9 log.go:172] (0xc000ac6b00) Data frame received for 1 I0108 21:14:53.747413 9 log.go:172] (0xc000ac6b00) (0xc001848b40) Stream removed, broadcasting: 5 I0108 21:14:53.747512 9 log.go:172] (0xc001848aa0) (1) Data frame handling I0108 21:14:53.747534 9 log.go:172] (0xc001848aa0) (1) Data frame sent I0108 21:14:53.747563 9 log.go:172] (0xc000ac6b00) (0xc0017acbe0) Stream removed, broadcasting: 3 I0108 21:14:53.747591 9 log.go:172] (0xc000ac6b00) (0xc001848aa0) Stream removed, broadcasting: 1 I0108 21:14:53.747606 9 log.go:172] (0xc000ac6b00) Go away received I0108 21:14:53.748349 9 log.go:172] (0xc000ac6b00) (0xc001848aa0) Stream removed, broadcasting: 1 I0108 21:14:53.748377 9 log.go:172] (0xc000ac6b00) (0xc0017acbe0) Stream removed, broadcasting: 3 I0108 21:14:53.748389 9 log.go:172] (0xc000ac6b00) (0xc001848b40) Stream removed, broadcasting: 5 Jan 8 21:14:53.748: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:14:53.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4310" for this suite. 
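For reference, the exec traffic above (the repeated Create stream / Data frame / Stream removed entries) is the API server multiplexing stdout, stderr, and error channels for each `cat /etc/hosts` and `cat /etc/hosts-original` invocation. The two opt-out cases the suite verifies can be sketched with minimal pod manifests; the names, image tag, and hostPath mount below are illustrative, not the suite's exact specs:

# Sketch: two pods whose /etc/hosts the kubelet leaves unmanaged.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-host-network        # illustrative name
spec:
  hostNetwork: true                   # host-network pods keep the node's /etc/hosts
  containers:
  - name: busybox-1
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-own-mount           # illustrative name
spec:
  containers:
  - name: busybox-3
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-file
      mountPath: /etc/hosts           # an explicit mount at /etc/hosts also opts out
  volumes:
  - name: hosts-file
    hostPath:
      path: /etc/hosts
      type: File

A container with neither property gets the kubelet-managed file, which appears to be what the `/etc/hosts-original` reads are compared against.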
• [SLOW TEST:20.555 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":509,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:14:53.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 8 21:14:53.931: INFO: Waiting up to 5m0s for pod "downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795" in namespace "downward-api-3283" to be "success or failure" Jan 8 21:14:53.941: INFO: Pod "downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795": Phase="Pending", Reason="", readiness=false. Elapsed: 9.815336ms Jan 8 21:14:55.954: INFO: Pod "downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02278526s Jan 8 21:14:58.269: INFO: Pod "downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337418302s Jan 8 21:15:00.343: INFO: Pod "downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412008975s Jan 8 21:15:02.350: INFO: Pod "downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.418486124s STEP: Saw pod success Jan 8 21:15:02.350: INFO: Pod "downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795" satisfied condition "success or failure" Jan 8 21:15:02.354: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795 container dapi-container: STEP: delete the pod Jan 8 21:15:02.409: INFO: Waiting for pod downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795 to disappear Jan 8 21:15:02.512: INFO: Pod downward-api-4fda00e6-fd00-4233-8b6b-4484ff210795 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:15:02.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3283" for this suite. 
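The pod in this test injects its own metadata through the downward API. A minimal sketch (the pod name is illustrative; the suite generates a UUID-suffixed one, as shown in the log above):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]      # prints the injected variables, then exits
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP

A Succeeded phase plus the expected values in the container log is what the "success or failure" wait above checks for.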
• [SLOW TEST:8.803 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":523,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:15:02.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 8 21:15:04.176: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 8 21:15:06.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:15:08.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 8 21:15:10.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114904, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 8 21:15:13.238: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:15:13.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4553-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:15:14.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2512" for this suite. STEP: Destroying namespace "webhook-2512-markers" for this suite. 
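Registering the mutating webhook for e2e-test-webhook-4553-crds.webhook.example.com amounts to creating a MutatingWebhookConfiguration that points at the e2e-test-webhook service deployed above. A hedged sketch; the rule scope, handler path, and caBundle are assumptions:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-crd                  # illustrative name
webhooks:
- name: mutate-custom-resource.example.com     # hypothetical webhook identifier
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-4553-crds"]
  clientConfig:
    service:
      namespace: webhook-2512
      name: e2e-test-webhook
      path: /mutating-custom-resource          # assumed handler path
    caBundle: "<base64-encoded CA>"            # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]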
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.265 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":25,"skipped":523,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:15:14.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jan 8 21:15:16.370: INFO: created pod pod-service-account-defaultsa Jan 8 21:15:16.370: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 8 21:15:16.398: INFO: created pod pod-service-account-mountsa Jan 8 21:15:16.398: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 8 21:15:16.420: INFO: created pod pod-service-account-nomountsa Jan 8 21:15:16.420: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 8 21:15:16.522: INFO: created pod pod-service-account-defaultsa-mountspec Jan 8 21:15:16.522: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 8 21:15:16.547: INFO: created pod pod-service-account-mountsa-mountspec Jan 8 21:15:16.547: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 8 21:15:16.602: INFO: created pod pod-service-account-nomountsa-mountspec Jan 8 21:15:16.602: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 8 21:15:16.743: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 8 21:15:16.744: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 8 21:15:16.803: INFO: created pod pod-service-account-mountsa-nomountspec Jan 8 21:15:16.803: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 8 21:15:18.117: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 8 21:15:18.117: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:15:18.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5956" for this suite. 
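Each pod created above pairs one of three service accounts with one of three pod-level settings; the rule the reported mounts confirm is that spec.automountServiceAccountToken, when set, overrides the service account's default. Sketch of one combination (nomountsa + mountspec, which the log shows mounting the token; names other than the pod's are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                        # illustrative name
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true      # pod-level value wins over the SA default
  containers:
  - name: token-test                      # illustrative container
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]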
• [SLOW TEST:5.968 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":26,"skipped":534,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:15:20.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 8 21:16:01.038: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:01.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 8 21:16:03.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:03.079: INFO: Pod pod-with-poststart-exec-hook still exists Jan 8 21:16:05.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:05.106: INFO: Pod pod-with-poststart-exec-hook still exists Jan 8 21:16:07.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:07.085: INFO: Pod pod-with-poststart-exec-hook still exists Jan 8 21:16:09.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:09.080: INFO: Pod pod-with-poststart-exec-hook still exists Jan 8 21:16:11.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:11.081: INFO: Pod pod-with-poststart-exec-hook still exists Jan 8 21:16:13.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:13.085: INFO: Pod pod-with-poststart-exec-hook still exists Jan 8 21:16:15.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 8 21:16:15.131: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:16:15.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3748" for this suite. 
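The pod under test wires an exec action into the postStart lifecycle hook; the kubelet runs it inside the container immediately after the container starts, and the suite then polls for deletion (the repeated "still exists" lines are that poll). A minimal sketch, with the hook command assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: poststart
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]   # assumed command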
• [SLOW TEST:54.345 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":539,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:16:15.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 8 21:16:15.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 8 21:16:17.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114975, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:16:19.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114975, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:16:21.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714114975, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 8 21:16:25.029: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:16:25.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:16:26.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7897" for this suite. STEP: Destroying namespace "webhook-7897-markers" for this suite. 
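For the deny cases above, the webhook answers each AdmissionReview with allowed: false until the offending key is removed. The response shape, in YAML form (the status code and message text are illustrative):

apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<request UID>"       # must echo the uid from the incoming review
  allowed: false
  status:
    code: 403
    message: "custom resource contains a disallowed key"   # illustrative text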
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.275 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":28,"skipped":539,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:16:26.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:16:26.571: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:16:34.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8460" for this suite. 
• [SLOW TEST:8.370 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":551,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:16:34.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jan 8 21:16:34.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7979 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 8 21:16:43.073: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0108 21:16:41.945735 266 log.go:172] (0xc000116fd0) (0xc00096a320) Create stream\nI0108 21:16:41.946061 266 log.go:172] (0xc000116fd0) (0xc00096a320) Stream added, broadcasting: 1\nI0108 21:16:41.957144 266 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0108 21:16:41.957252 266 log.go:172] (0xc000116fd0) (0xc0004f92c0) Create stream\nI0108 21:16:41.957271 266 log.go:172] (0xc000116fd0) (0xc0004f92c0) Stream added, broadcasting: 3\nI0108 21:16:41.958907 266 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0108 21:16:41.958969 266 log.go:172] (0xc000116fd0) (0xc000968000) Create stream\nI0108 21:16:41.959009 266 log.go:172] (0xc000116fd0) (0xc000968000) Stream added, broadcasting: 5\nI0108 21:16:41.960593 266 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0108 21:16:41.960667 266 log.go:172] (0xc000116fd0) (0xc00096a3c0) Create stream\nI0108 21:16:41.960695 266 log.go:172] (0xc000116fd0) (0xc00096a3c0) Stream added, broadcasting: 7\nI0108 21:16:41.962055 266 log.go:172] (0xc000116fd0) Reply frame received for 7\nI0108 21:16:41.962711 266 log.go:172] (0xc0004f92c0) (3) Writing data frame\nI0108 21:16:41.963055 266 log.go:172] (0xc0004f92c0) (3) Writing data frame\nI0108 21:16:41.966873 266 log.go:172] (0xc000116fd0) Data frame received for 5\nI0108 21:16:41.966898 266 log.go:172] (0xc000968000) (5) Data frame handling\nI0108 21:16:41.966948 266 log.go:172] (0xc000968000) (5) Data frame sent\nI0108 21:16:41.971001 266 log.go:172] (0xc000116fd0) Data frame received for 5\nI0108 21:16:41.971019 266 log.go:172] (0xc000968000) (5) Data frame handling\nI0108 21:16:41.971031 266 log.go:172] (0xc000968000) (5) Data frame sent\nI0108 21:16:43.004832 266 log.go:172] (0xc000116fd0) Data frame received for 1\nI0108 21:16:43.005176 266 log.go:172] (0xc000116fd0) (0xc0004f92c0) Stream removed, broadcasting: 3\nI0108 21:16:43.005263 266 log.go:172] (0xc00096a320) (1) Data frame handling\nI0108 21:16:43.005298 266 log.go:172] (0xc00096a320) (1) Data frame sent\nI0108 21:16:43.005312 266 log.go:172] (0xc000116fd0) (0xc00096a320) Stream removed, broadcasting: 1\nI0108 21:16:43.005367 266 log.go:172] (0xc000116fd0) (0xc000968000) Stream removed, broadcasting: 5\nI0108 21:16:43.005747 266 log.go:172] (0xc000116fd0) (0xc00096a3c0) Stream removed, broadcasting: 7\nI0108 21:16:43.005788 266 log.go:172] (0xc000116fd0) Go away received\nI0108 21:16:43.006767 266 log.go:172] (0xc000116fd0) (0xc00096a320) Stream removed, broadcasting: 1\nI0108 21:16:43.006854 266 log.go:172] (0xc000116fd0) (0xc0004f92c0) Stream removed, broadcasting: 3\nI0108 21:16:43.006870 266 log.go:172] (0xc000116fd0) (0xc000968000) Stream removed, broadcasting: 5\nI0108 21:16:43.006884 266 log.go:172] (0xc000116fd0) (0xc00096a3c0) Stream removed, broadcasting: 7\n" Jan 8 21:16:43.074: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:16:45.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7979" for this suite. 
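The deprecation warning in the stderr above points at kubectl create; the Job that --generator=job/v1 produced is roughly the manifest below (container name assumed, restart policy and stdin wiring taken from the command line), with --rm handling the deletion the log confirms:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure          # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job   # assumed container name
        image: docker.io/library/busybox:1.29
        stdin: true                     # from --stdin; the attach feeds "abcd1234"
        command: ["sh", "-c", "cat && echo 'stdin closed'"]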
• [SLOW TEST:10.343 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":30,"skipped":554,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:16:45.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:16:45.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2110" for this suite. 
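The discovery checks above hold for any served CRD; a minimal apiextensions.k8s.io/v1 definition that would surface in those documents (group and names hypothetical):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true                 # only served versions appear in /apis discovery
    storage: true
    schema:
      openAPIV3Schema:
        type: object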
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":31,"skipped":560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:16:45.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 8 21:16:45.394: INFO: Waiting up to 5m0s for pod "pod-0d3c1708-e8dc-41da-8083-a299381fa213" in namespace "emptydir-3180" to be "success or failure" Jan 8 21:16:45.408: INFO: Pod "pod-0d3c1708-e8dc-41da-8083-a299381fa213": Phase="Pending", Reason="", readiness=false. Elapsed: 13.928ms Jan 8 21:16:47.413: INFO: Pod "pod-0d3c1708-e8dc-41da-8083-a299381fa213": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018951366s Jan 8 21:16:49.422: INFO: Pod "pod-0d3c1708-e8dc-41da-8083-a299381fa213": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028506671s Jan 8 21:16:51.539: INFO: Pod "pod-0d3c1708-e8dc-41da-8083-a299381fa213": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144726119s Jan 8 21:16:53.548: INFO: Pod "pod-0d3c1708-e8dc-41da-8083-a299381fa213": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154105141s STEP: Saw pod success Jan 8 21:16:53.548: INFO: Pod "pod-0d3c1708-e8dc-41da-8083-a299381fa213" satisfied condition "success or failure" Jan 8 21:16:53.552: INFO: Trying to get logs from node jerma-node pod pod-0d3c1708-e8dc-41da-8083-a299381fa213 container test-container: STEP: delete the pod Jan 8 21:16:53.616: INFO: Waiting for pod pod-0d3c1708-e8dc-41da-8083-a299381fa213 to disappear Jan 8 21:16:53.629: INFO: Pod pod-0d3c1708-e8dc-41da-8083-a299381fa213 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:16:53.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3180" for this suite. 
• [SLOW TEST:8.400 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":589,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:16:53.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 8 21:16:53.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5188' Jan 8 21:16:54.160: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 8 21:16:54.160: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Jan 8 21:16:58.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5188' Jan 8 21:16:58.399: INFO: stderr: "" Jan 8 21:16:58.399: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:16:58.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5188" for this suite. 
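Per the deprecation notice, the kubectl create equivalent of the generated deployment is approximately the following (the run= label is what the old generator applied):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine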
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":33,"skipped":590,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:16:58.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 8 21:16:58.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8426' Jan 8 21:16:58.729: INFO: stderr: "" Jan 8 21:16:58.729: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 Jan 8 21:16:58.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8426' Jan 8 21:17:06.394: INFO: stderr: "" Jan 8 21:17:06.394: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:17:06.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8426" for this suite. 
• [SLOW TEST:8.038 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":34,"skipped":590,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:17:06.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jan 8 21:17:06.577: INFO: Waiting up to 5m0s for pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830" in namespace "var-expansion-3802" to be "success or failure" Jan 8 21:17:06.584: INFO: Pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830": Phase="Pending", Reason="", readiness=false. Elapsed: 6.855429ms Jan 8 21:17:08.594: INFO: Pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016972571s Jan 8 21:17:10.604: INFO: Pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026989822s Jan 8 21:17:12.609: INFO: Pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032031932s Jan 8 21:17:14.622: INFO: Pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044534465s Jan 8 21:17:16.628: INFO: Pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050932178s STEP: Saw pod success Jan 8 21:17:16.628: INFO: Pod "var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830" satisfied condition "success or failure" Jan 8 21:17:16.631: INFO: Trying to get logs from node jerma-node pod var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830 container dapi-container: STEP: delete the pod Jan 8 21:17:16.710: INFO: Waiting for pod var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830 to disappear Jan 8 21:17:16.738: INFO: Pod var-expansion-0de73c1a-a9bb-4137-8c22-ee1fe514a830 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:17:16.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3802" for this suite. 
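Env composition means a later env entry can reference an earlier one with $(VAR) syntax, which the kubelet expands before starting the container. A sketch with assumed variable names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # expands to prefix-foo-value-suffix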
• [SLOW TEST:10.292 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":608,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:17:16.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 in namespace container-probe-4137 Jan 8 21:17:22.987: INFO: Started pod liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 in namespace container-probe-4137 STEP: checking the pod's current state and verifying that restartCount is present Jan 8 21:17:22.990: INFO: Initial restart count of pod liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 is 0 Jan 8 21:17:43.088: INFO: Restart count of pod container-probe-4137/liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 is now 1 (20.098489121s elapsed) Jan 8 21:18:03.224: INFO: Restart count of pod container-probe-4137/liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 is now 2 (40.233821918s elapsed) Jan 8 21:18:23.339: INFO: Restart count of pod container-probe-4137/liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 is now 3 (1m0.349369128s elapsed) Jan 8 21:18:43.431: INFO: Restart count of pod container-probe-4137/liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 is now 4 (1m20.441198755s elapsed) Jan 8 21:19:45.670: INFO: Restart count of pod container-probe-4137/liveness-2dc4be9c-ae2f-4c83-b02d-6874f9bbfe22 is now 5 (2m22.680033091s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:19:45.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4137" for this suite. 
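[Annotation] The steadily climbing restartCount above is produced by a liveness probe that begins failing after the container's healthy window ends; each failure past the threshold triggers a kubelet restart. A sketch of that pattern, assuming an illustrative busybox fixture (the real e2e fixture differs in detail):

// failing_liveness.go: healthy for ~10s, then the probe's `cat`
// starts failing and restartCount increases monotonically.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "liveness",
		Image:   "busybox:1.29",
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			// Note: this embedded field is named ProbeHandler in k8s.io/api >= v0.22.
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
	out, _ := json.Marshal(c.LivenessProbe)
	fmt.Println(string(out))
}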
• [SLOW TEST:148.967 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:19:45.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-120 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-120 to expose endpoints map[] Jan 8 21:19:46.014: INFO: Get endpoints failed (11.205822ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 8 21:19:47.023: INFO: successfully validated that service multi-endpoint-test in namespace services-120 exposes endpoints map[] (1.020152365s elapsed) STEP: Creating pod pod1 in namespace services-120 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-120 to expose endpoints map[pod1:[100]] Jan 8 21:19:51.200: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.092393489s elapsed, will retry) Jan 8 21:19:54.224: INFO: successfully validated that service multi-endpoint-test in namespace services-120 exposes endpoints map[pod1:[100]] (7.117149799s elapsed) STEP: Creating pod pod2 in namespace services-120 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-120 to expose endpoints map[pod1:[100] pod2:[101]] Jan 8 21:19:58.616: INFO: Unexpected endpoints: found map[b928f8c1-2dc0-47c1-9196-4a5027496e54:[100]], expected map[pod1:[100] pod2:[101]] (4.379661301s elapsed, will retry) Jan 8 21:20:01.699: INFO: successfully validated that service multi-endpoint-test in namespace services-120 exposes endpoints map[pod1:[100] pod2:[101]] (7.462271751s elapsed) STEP: Deleting pod pod1 in namespace services-120 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-120 to expose endpoints map[pod2:[101]] Jan 8 21:20:01.768: INFO: successfully validated that service multi-endpoint-test in namespace services-120 exposes endpoints map[pod2:[101]] (61.093892ms elapsed) STEP: Deleting pod pod2 in namespace services-120 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-120 to expose endpoints map[] Jan 8 21:20:01.848: INFO: successfully validated that service multi-endpoint-test in namespace services-120 exposes endpoints map[] (50.803208ms elapsed) 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:20:01.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-120" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.181 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":37,"skipped":659,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:20:01.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:20:18.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3970" for this suite. • [SLOW TEST:16.961 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":38,"skipped":673,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:20:18.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1347 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jan 8 21:20:19.030: INFO: Found 0 stateful pods, waiting for 3 Jan 8 21:20:29.036: INFO: Found 2 stateful pods, waiting for 3 Jan 8 21:20:39.042: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:20:39.042: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:20:39.042: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 8 21:20:49.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:20:49.039: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:20:49.039: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:20:49.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1347 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 8 21:20:49.519: INFO: stderr: "I0108 21:20:49.327436 365 log.go:172] (0xc000afb8c0) (0xc000af6780) Create stream\nI0108 21:20:49.327751 365 log.go:172] (0xc000afb8c0) (0xc000af6780) Stream added, broadcasting: 1\nI0108 21:20:49.338186 365 log.go:172] (0xc000afb8c0) Reply frame received for 1\nI0108 21:20:49.338261 365 log.go:172] (0xc000afb8c0) (0xc0006705a0) Create stream\nI0108 21:20:49.338278 365 log.go:172] (0xc000afb8c0) (0xc0006705a0) Stream added, broadcasting: 3\nI0108 21:20:49.339325 365 log.go:172] (0xc000afb8c0) Reply frame received for 3\nI0108 21:20:49.339350 365 log.go:172] (0xc000afb8c0) (0xc000413360) Create stream\nI0108 21:20:49.339358 365 log.go:172] (0xc000afb8c0) (0xc000413360) Stream added, broadcasting: 5\nI0108 21:20:49.340318 365 log.go:172] (0xc000afb8c0) Reply frame received for 5\nI0108 21:20:49.415702 365 log.go:172] (0xc000afb8c0) Data frame received for 5\nI0108 21:20:49.415759 365 log.go:172] (0xc000413360) (5) Data frame handling\nI0108 21:20:49.415776 365 log.go:172] (0xc000413360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:20:49.441324 365 log.go:172] (0xc000afb8c0) Data frame received for 3\nI0108 21:20:49.441378 365 log.go:172] 
(0xc0006705a0) (3) Data frame handling\nI0108 21:20:49.441396 365 log.go:172] (0xc0006705a0) (3) Data frame sent\nI0108 21:20:49.507919 365 log.go:172] (0xc000afb8c0) Data frame received for 1\nI0108 21:20:49.507986 365 log.go:172] (0xc000afb8c0) (0xc0006705a0) Stream removed, broadcasting: 3\nI0108 21:20:49.508074 365 log.go:172] (0xc000af6780) (1) Data frame handling\nI0108 21:20:49.508111 365 log.go:172] (0xc000af6780) (1) Data frame sent\nI0108 21:20:49.508147 365 log.go:172] (0xc000afb8c0) (0xc000413360) Stream removed, broadcasting: 5\nI0108 21:20:49.508202 365 log.go:172] (0xc000afb8c0) (0xc000af6780) Stream removed, broadcasting: 1\nI0108 21:20:49.508231 365 log.go:172] (0xc000afb8c0) Go away received\nI0108 21:20:49.509186 365 log.go:172] (0xc000afb8c0) (0xc000af6780) Stream removed, broadcasting: 1\nI0108 21:20:49.509201 365 log.go:172] (0xc000afb8c0) (0xc0006705a0) Stream removed, broadcasting: 3\nI0108 21:20:49.509209 365 log.go:172] (0xc000afb8c0) (0xc000413360) Stream removed, broadcasting: 5\n" Jan 8 21:20:49.519: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 8 21:20:49.519: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 8 21:20:59.564: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 8 21:21:09.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1347 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 8 21:21:10.017: INFO: stderr: "I0108 21:21:09.823344 385 log.go:172] (0xc000227130) (0xc000641d60) Create stream\nI0108 21:21:09.823701 385 log.go:172] (0xc000227130) (0xc000641d60) Stream added, broadcasting: 1\nI0108 21:21:09.827079 385 log.go:172] (0xc000227130) Reply frame received for 1\nI0108 21:21:09.827124 385 log.go:172] (0xc000227130) (0xc000616780) Create stream\nI0108 21:21:09.827136 385 log.go:172] (0xc000227130) (0xc000616780) Stream added, broadcasting: 3\nI0108 21:21:09.828450 385 log.go:172] (0xc000227130) Reply frame received for 3\nI0108 21:21:09.828474 385 log.go:172] (0xc000227130) (0xc0006f0000) Create stream\nI0108 21:21:09.828483 385 log.go:172] (0xc000227130) (0xc0006f0000) Stream added, broadcasting: 5\nI0108 21:21:09.829712 385 log.go:172] (0xc000227130) Reply frame received for 5\nI0108 21:21:09.913302 385 log.go:172] (0xc000227130) Data frame received for 5\nI0108 21:21:09.913580 385 log.go:172] (0xc0006f0000) (5) Data frame handling\nI0108 21:21:09.913639 385 log.go:172] (0xc0006f0000) (5) Data frame sent\nI0108 21:21:09.914069 385 log.go:172] (0xc000227130) Data frame received for 3\nI0108 21:21:09.914142 385 log.go:172] (0xc000616780) (3) Data frame handling\nI0108 21:21:09.914202 385 log.go:172] (0xc000616780) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:21:10.003347 385 log.go:172] (0xc000227130) Data frame received for 1\nI0108 21:21:10.003519 385 log.go:172] (0xc000227130) (0xc000616780) Stream removed, broadcasting: 3\nI0108 21:21:10.003586 385 log.go:172] (0xc000641d60) (1) Data frame handling\nI0108 21:21:10.003681 385 log.go:172] (0xc000641d60) (1) Data frame sent\nI0108 21:21:10.003941 385 log.go:172] (0xc000227130) (0xc0006f0000) Stream removed, broadcasting: 5\nI0108 21:21:10.004261 385 
log.go:172] (0xc000227130) (0xc000641d60) Stream removed, broadcasting: 1\nI0108 21:21:10.004407 385 log.go:172] (0xc000227130) Go away received\nI0108 21:21:10.006394 385 log.go:172] (0xc000227130) (0xc000641d60) Stream removed, broadcasting: 1\nI0108 21:21:10.006449 385 log.go:172] (0xc000227130) (0xc000616780) Stream removed, broadcasting: 3\nI0108 21:21:10.006460 385 log.go:172] (0xc000227130) (0xc0006f0000) Stream removed, broadcasting: 5\n" Jan 8 21:21:10.018: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 8 21:21:10.018: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 8 21:21:10.034: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:21:10.034: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:10.034: INFO: Waiting for Pod statefulset-1347/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:10.034: INFO: Waiting for Pod statefulset-1347/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:20.048: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:21:20.048: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:20.048: INFO: Waiting for Pod statefulset-1347/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:20.048: INFO: Waiting for Pod statefulset-1347/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:30.134: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:21:30.134: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:30.134: INFO: Waiting for Pod statefulset-1347/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:40.048: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:21:40.048: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 8 21:21:50.050: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:21:50.050: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 8 21:22:00.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1347 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 8 21:22:00.635: INFO: stderr: "I0108 21:22:00.285644 406 log.go:172] (0xc000c4b340) (0xc0009ec500) Create stream\nI0108 21:22:00.285964 406 log.go:172] (0xc000c4b340) (0xc0009ec500) Stream added, broadcasting: 1\nI0108 21:22:00.291083 406 log.go:172] (0xc000c4b340) Reply frame received for 1\nI0108 21:22:00.291164 406 log.go:172] (0xc000c4b340) (0xc000ca8640) Create stream\nI0108 21:22:00.291184 406 log.go:172] (0xc000c4b340) (0xc000ca8640) Stream added, broadcasting: 3\nI0108 21:22:00.295576 406 log.go:172] (0xc000c4b340) Reply frame received for 3\nI0108 21:22:00.295602 406 log.go:172] (0xc000c4b340) (0xc000ca86e0) Create stream\nI0108 21:22:00.295610 406 log.go:172] (0xc000c4b340) (0xc000ca86e0) Stream added, broadcasting: 5\nI0108 21:22:00.296935 406 log.go:172] (0xc000c4b340) Reply frame received for 5\nI0108 21:22:00.383787 406 log.go:172] 
(0xc000c4b340) Data frame received for 5\nI0108 21:22:00.383946 406 log.go:172] (0xc000ca86e0) (5) Data frame handling\nI0108 21:22:00.383973 406 log.go:172] (0xc000ca86e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:22:00.459463 406 log.go:172] (0xc000c4b340) Data frame received for 3\nI0108 21:22:00.460105 406 log.go:172] (0xc000ca8640) (3) Data frame handling\nI0108 21:22:00.460224 406 log.go:172] (0xc000ca8640) (3) Data frame sent\nI0108 21:22:00.622988 406 log.go:172] (0xc000c4b340) (0xc000ca8640) Stream removed, broadcasting: 3\nI0108 21:22:00.623106 406 log.go:172] (0xc000c4b340) Data frame received for 1\nI0108 21:22:00.623125 406 log.go:172] (0xc0009ec500) (1) Data frame handling\nI0108 21:22:00.623139 406 log.go:172] (0xc0009ec500) (1) Data frame sent\nI0108 21:22:00.623147 406 log.go:172] (0xc000c4b340) (0xc0009ec500) Stream removed, broadcasting: 1\nI0108 21:22:00.624678 406 log.go:172] (0xc000c4b340) (0xc000ca86e0) Stream removed, broadcasting: 5\nI0108 21:22:00.624740 406 log.go:172] (0xc000c4b340) (0xc0009ec500) Stream removed, broadcasting: 1\nI0108 21:22:00.624756 406 log.go:172] (0xc000c4b340) (0xc000ca8640) Stream removed, broadcasting: 3\nI0108 21:22:00.624768 406 log.go:172] (0xc000c4b340) (0xc000ca86e0) Stream removed, broadcasting: 5\nI0108 21:22:00.624853 406 log.go:172] (0xc000c4b340) Go away received\n" Jan 8 21:22:00.635: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 8 21:22:00.635: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 8 21:22:00.703: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 8 21:22:10.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1347 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 8 21:22:11.109: INFO: stderr: "I0108 21:22:10.953389 428 log.go:172] (0xc0009d0f20) (0xc00093e280) Create stream\nI0108 21:22:10.953736 428 log.go:172] (0xc0009d0f20) (0xc00093e280) Stream added, broadcasting: 1\nI0108 21:22:10.962238 428 log.go:172] (0xc0009d0f20) Reply frame received for 1\nI0108 21:22:10.962344 428 log.go:172] (0xc0009d0f20) (0xc0006c7b80) Create stream\nI0108 21:22:10.962362 428 log.go:172] (0xc0009d0f20) (0xc0006c7b80) Stream added, broadcasting: 3\nI0108 21:22:10.965612 428 log.go:172] (0xc0009d0f20) Reply frame received for 3\nI0108 21:22:10.965681 428 log.go:172] (0xc0009d0f20) (0xc000676780) Create stream\nI0108 21:22:10.965715 428 log.go:172] (0xc0009d0f20) (0xc000676780) Stream added, broadcasting: 5\nI0108 21:22:10.968169 428 log.go:172] (0xc0009d0f20) Reply frame received for 5\nI0108 21:22:11.033591 428 log.go:172] (0xc0009d0f20) Data frame received for 3\nI0108 21:22:11.033688 428 log.go:172] (0xc0006c7b80) (3) Data frame handling\nI0108 21:22:11.033720 428 log.go:172] (0xc0006c7b80) (3) Data frame sent\nI0108 21:22:11.033786 428 log.go:172] (0xc0009d0f20) Data frame received for 5\nI0108 21:22:11.033805 428 log.go:172] (0xc000676780) (5) Data frame handling\nI0108 21:22:11.033817 428 log.go:172] (0xc000676780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:22:11.098881 428 log.go:172] (0xc0009d0f20) (0xc000676780) Stream removed, broadcasting: 5\nI0108 21:22:11.098998 428 log.go:172] (0xc0009d0f20) Data frame received for 1\nI0108 21:22:11.099015 428 log.go:172] (0xc00093e280) (1) Data frame 
handling\nI0108 21:22:11.099025 428 log.go:172] (0xc00093e280) (1) Data frame sent\nI0108 21:22:11.099062 428 log.go:172] (0xc0009d0f20) (0xc00093e280) Stream removed, broadcasting: 1\nI0108 21:22:11.099506 428 log.go:172] (0xc0009d0f20) (0xc0006c7b80) Stream removed, broadcasting: 3\nI0108 21:22:11.099524 428 log.go:172] (0xc0009d0f20) Go away received\nI0108 21:22:11.099711 428 log.go:172] (0xc0009d0f20) (0xc00093e280) Stream removed, broadcasting: 1\nI0108 21:22:11.099746 428 log.go:172] (0xc0009d0f20) (0xc0006c7b80) Stream removed, broadcasting: 3\nI0108 21:22:11.099757 428 log.go:172] (0xc0009d0f20) (0xc000676780) Stream removed, broadcasting: 5\n" Jan 8 21:22:11.110: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 8 21:22:11.110: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 8 21:22:21.140: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:22:21.140: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 8 21:22:21.140: INFO: Waiting for Pod statefulset-1347/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 8 21:22:31.151: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:22:31.152: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 8 21:22:41.153: INFO: Waiting for StatefulSet statefulset-1347/ss2 to complete update Jan 8 21:22:41.153: INFO: Waiting for Pod statefulset-1347/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 8 21:22:51.154: INFO: Deleting all statefulset in ns statefulset-1347 Jan 8 21:22:51.161: INFO: Scaling statefulset ss2 to 0 Jan 8 21:23:31.201: INFO: Waiting for statefulset status.replicas updated to 0 Jan 8 21:23:31.206: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:23:31.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1347" for this suite. 
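[Annotation] The roll-forward and rollback above are both driven the same way: editing the StatefulSet's pod template image, which the controller applies pod by pod in reverse ordinal order under the default RollingUpdate strategy. A minimal client-go sketch of that trigger, assuming client-go v0.18+; namespace and names are illustrative, and a production version would retry Update on resource-version conflicts:

// ss_image_update.go: roll a StatefulSet forward (and back, by
// re-applying the previous image). Sketch only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func setImage(cs *kubernetes.Clientset, ns, name, image string) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.Template.Spec.Containers[0].Image = image
	_, err = cs.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := setImage(cs, "default", "ss2", "docker.io/library/httpd:2.4.39-alpine"); err != nil {
		panic(err)
	}
	fmt.Println("rolled forward; wait for the rollout, then re-apply 2.4.38-alpine to roll back")
}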
• [SLOW TEST:192.381 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":39,"skipped":695,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:23:31.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Jan 8 21:23:31.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6366' Jan 8 21:23:33.784: INFO: stderr: "" Jan 8 21:23:33.784: INFO: stdout: "pod/pause created\n" Jan 8 21:23:33.784: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 8 21:23:33.785: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6366" to be "running and ready" Jan 8 21:23:33.812: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 27.793922ms Jan 8 21:23:35.987: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202164554s Jan 8 21:23:37.991: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206676108s Jan 8 21:23:40.001: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216116399s Jan 8 21:23:42.012: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.227077832s Jan 8 21:23:42.012: INFO: Pod "pause" satisfied condition "running and ready" Jan 8 21:23:42.012: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jan 8 21:23:42.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6366' Jan 8 21:23:42.226: INFO: stderr: "" Jan 8 21:23:42.226: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 8 21:23:42.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6366' Jan 8 21:23:42.386: INFO: stderr: "" Jan 8 21:23:42.386: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 8 21:23:42.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6366' Jan 8 21:23:42.596: INFO: stderr: "" Jan 8 21:23:42.596: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 8 21:23:42.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6366' Jan 8 21:23:42.753: INFO: stderr: "" Jan 8 21:23:42.754: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Jan 8 21:23:42.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6366' Jan 8 21:23:42.920: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 8 21:23:42.920: INFO: stdout: "pod \"pause\" force deleted\n" Jan 8 21:23:42.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6366' Jan 8 21:23:43.109: INFO: stderr: "No resources found in kubectl-6366 namespace.\n" Jan 8 21:23:43.109: INFO: stdout: "" Jan 8 21:23:43.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6366 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 8 21:23:43.237: INFO: stderr: "" Jan 8 21:23:43.237: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:23:43.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6366" for this suite. 
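[Annotation] The `kubectl label pods pause testing-label=...` and trailing-dash removal above map onto a metadata patch; with a strategic merge patch, setting a label key to null deletes it. A minimal client-go sketch, assuming client-go v0.18+ and an illustrative namespace:

// label_patch.go: add then remove a pod label, the programmatic
// analogue of `kubectl label pods pause k=v` / `kubectl label pods pause k-`.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default")
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// A null value deletes the key, matching `testing-label-` above.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}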
• [SLOW TEST:11.995 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":40,"skipped":707,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:23:43.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jan 8 21:23:51.940: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4269 pod-service-account-63d4525c-b984-4ffb-83bd-07b8c2e99de1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 8 21:23:52.427: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4269 pod-service-account-63d4525c-b984-4ffb-83bd-07b8c2e99de1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 8 21:23:52.842: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4269 pod-service-account-63d4525c-b984-4ffb-83bd-07b8c2e99de1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:23:53.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4269" for this suite. • [SLOW TEST:10.007 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":41,"skipped":725,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:23:53.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:23:53.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5610" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":42,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:23:53.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 8 21:23:53.749: INFO: Waiting up to 5m0s for pod "downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b" in namespace "downward-api-4943" to be "success or failure" Jan 8 21:23:53.764: INFO: Pod "downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.182546ms Jan 8 21:23:55.773: INFO: Pod "downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02377132s Jan 8 21:23:57.782: INFO: Pod "downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03271174s Jan 8 21:23:59.829: INFO: Pod "downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079423882s Jan 8 21:24:01.835: INFO: Pod "downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.085698406s STEP: Saw pod success Jan 8 21:24:01.835: INFO: Pod "downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b" satisfied condition "success or failure" Jan 8 21:24:01.839: INFO: Trying to get logs from node jerma-node pod downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b container dapi-container: STEP: delete the pod Jan 8 21:24:01.929: INFO: Waiting for pod downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b to disappear Jan 8 21:24:01.944: INFO: Pod downward-api-347664c9-411d-4ac5-aa29-aef5ad83da5b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:24:01.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4943" for this suite. • [SLOW TEST:8.437 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:24:01.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:24:10.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7679" for this suite. 
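[Annotation] "Image defaults" above means the container runs whatever ENTRYPOINT/CMD the image ships, because the pod spec sets neither Command nor Args. A minimal sketch of that spec shape; the container name is illustrative:

// image_defaults.go: with Command (ENTRYPOINT override) and Args
// (CMD override) both unset, the runtime uses the image's own defaults.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "containers-test",
		Image: "docker.io/library/httpd:2.4.38-alpine",
		// Command and Args deliberately left nil.
	}
	out, _ := json.Marshal(c)
	fmt.Println(string(out)) // no "command" or "args" keys are serialized
}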
• [SLOW TEST:8.161 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":797,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:24:10.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-540c8b30-b633-469c-a59f-5afd224c349c STEP: Creating a pod to test consume secrets Jan 8 21:24:10.265: INFO: Waiting up to 5m0s for pod "pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec" in namespace "secrets-6308" to be "success or failure" Jan 8 21:24:10.279: INFO: Pod "pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 14.395426ms Jan 8 21:24:12.286: INFO: Pod "pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020875848s Jan 8 21:24:14.292: INFO: Pod "pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027010373s Jan 8 21:24:16.295: INFO: Pod "pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030197536s Jan 8 21:24:18.315: INFO: Pod "pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05004453s STEP: Saw pod success Jan 8 21:24:18.315: INFO: Pod "pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec" satisfied condition "success or failure" Jan 8 21:24:18.321: INFO: Trying to get logs from node jerma-node pod pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec container secret-volume-test: STEP: delete the pod Jan 8 21:24:18.465: INFO: Waiting for pod pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec to disappear Jan 8 21:24:18.477: INFO: Pod pod-secrets-d7077372-d60a-4b3b-96e6-b4383db8f1ec no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:24:18.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6308" for this suite. 
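[Annotation] "Volume with mappings" above refers to the Items field of a secret volume source: instead of projecting every key under its own name, each listed key is remapped to a chosen path (and optionally a file mode). A minimal sketch; the secret name, key, path, and mode are illustrative:

// secret_mapping.go: mount a Secret with an explicit key->path mapping.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0444)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				// Without Items, every key is projected under its own
				// name; Items remaps key "data-1" to a chosen path.
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: &mode,
				}},
			},
		},
	}
	out, _ := json.Marshal(vol)
	fmt.Println(string(out))
}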
• [SLOW TEST:8.352 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":814,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:24:18.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-fd2418e4-7d1d-4c3f-ab89-6d1f385f15c6 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:24:18.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2199" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":46,"skipped":825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:24:18.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:24:18.852: INFO: Creating ReplicaSet my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525 Jan 8 21:24:18.886: INFO: Pod name my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525: Found 0 pods out of 1 Jan 8 21:24:23.954: INFO: Pod name my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525: Found 1 pods out of 1 Jan 8 21:24:23.954: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525" is running Jan 8 21:24:25.971: INFO: Pod "my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525-b9ff2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:18 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:18 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:18 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:18 +0000 UTC Reason: Message:}]) Jan 8 21:24:25.971: INFO: Trying to dial the pod Jan 8 21:24:30.992: INFO: Controller my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525: Got expected result from replica 1 [my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525-b9ff2]: "my-hostname-basic-9599c2c3-77f6-47ab-9534-b58192a2f525-b9ff2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:24:30.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6615" for this suite. • [SLOW TEST:12.313 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":47,"skipped":896,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:24:31.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6 Jan 8 21:24:31.138: INFO: Pod name my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6: Found 0 pods out of 1 Jan 8 21:24:36.176: INFO: Pod name my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6: Found 1 pods out of 1 Jan 8 21:24:36.176: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6" are running Jan 8 21:24:38.195: INFO: Pod "my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6-brfzn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:31 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 21:24:31 +0000 UTC Reason: Message:}]) Jan 8 21:24:38.196: INFO: Trying to dial the pod Jan 8 21:24:43.217: INFO: Controller my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6: Got expected result from replica 1 [my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6-brfzn]: "my-hostname-basic-6234204d-0634-47c6-b039-90c7925356b6-brfzn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:24:43.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7463" for this suite. • [SLOW TEST:12.216 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":48,"skipped":899,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:24:43.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:25:43.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8799" for this suite. 
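[Annotation] The "never ready and never restart" outcome above hinges on the difference between probe types: readiness failures only withhold the Ready condition, they never restart the container, so the pod sits Running with Ready=false and restartCount 0 for the full minute. A sketch of such a probe, assuming an illustrative busybox fixture:

// never_ready.go: a readiness probe that always fails; unlike a
// liveness probe, this never triggers a kubelet restart.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "probe-test",
		Image:   "busybox:1.29",
		Command: []string{"sleep", "600"},
		ReadinessProbe: &corev1.Probe{
			// Note: this embedded field is named ProbeHandler in k8s.io/api >= v0.22.
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
			},
			PeriodSeconds: 5,
		},
	}
	out, _ := json.Marshal(c.ReadinessProbe)
	fmt.Println(string(out))
}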
• [SLOW TEST:60.164 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":900,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:25:43.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:25:43.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 8 21:25:44.208: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-08T21:25:44Z generation:1 name:name1 resourceVersion:886129 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3a426416-7fee-4c06-91d6-b0b733f0bbc7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 8 21:25:54.216: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-08T21:25:54Z generation:1 name:name2 resourceVersion:886168 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1cefe38f-c273-483c-882c-2d6d709924a0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 8 21:26:04.223: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-08T21:25:44Z generation:2 name:name1 resourceVersion:886192 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3a426416-7fee-4c06-91d6-b0b733f0bbc7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 8 21:26:14.233: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-08T21:25:54Z generation:2 name:name2 resourceVersion:886216 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1cefe38f-c273-483c-882c-2d6d709924a0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 8 21:26:24.252: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-08T21:25:44Z generation:2 name:name1 resourceVersion:886240 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3a426416-7fee-4c06-91d6-b0b733f0bbc7] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 8 21:26:34.267: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-08T21:25:54Z generation:2 name:name2 resourceVersion:886262 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1cefe38f-c273-483c-882c-2d6d709924a0] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:26:44.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8692" for this suite. • [SLOW TEST:61.417 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":50,"skipped":916,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:26:44.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
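For reference, the "simple DaemonSet" being created here is roughly the following sketch against the v1.17-era Go API; the label key and image are assumptions. The interesting part of the test comes after rollout: one daemon pod is forced to Failed (see the sketch after the availability polling below) and the controller must notice and replace it.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // assumed; matches images used elsewhere in this run
					}},
				},
			},
		},
	}
	_ = ds // one pod per schedulable node; the log below polls until both nodes report an available pod
}
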
Jan 8 21:26:44.983: INFO: Number of nodes with available pods: 0 Jan 8 21:26:44.983: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:45.995: INFO: Number of nodes with available pods: 0 Jan 8 21:26:45.995: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:46.997: INFO: Number of nodes with available pods: 0 Jan 8 21:26:46.997: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:47.996: INFO: Number of nodes with available pods: 0 Jan 8 21:26:47.996: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:48.992: INFO: Number of nodes with available pods: 0 Jan 8 21:26:48.992: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:50.145: INFO: Number of nodes with available pods: 0 Jan 8 21:26:50.145: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:51.062: INFO: Number of nodes with available pods: 0 Jan 8 21:26:51.062: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:51.997: INFO: Number of nodes with available pods: 0 Jan 8 21:26:51.997: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:52.992: INFO: Number of nodes with available pods: 0 Jan 8 21:26:52.992: INFO: Node jerma-node is running more than one daemon pod Jan 8 21:26:54.002: INFO: Number of nodes with available pods: 2 Jan 8 21:26:54.003: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 8 21:26:54.097: INFO: Number of nodes with available pods: 1 Jan 8 21:26:54.097: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:26:55.110: INFO: Number of nodes with available pods: 1 Jan 8 21:26:55.110: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:26:56.264: INFO: Number of nodes with available pods: 1 Jan 8 21:26:56.264: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:26:57.114: INFO: Number of nodes with available pods: 1 Jan 8 21:26:57.114: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:26:58.111: INFO: Number of nodes with available pods: 1 Jan 8 21:26:58.111: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:27:00.064: INFO: Number of nodes with available pods: 1 Jan 8 21:27:00.064: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:27:00.449: INFO: Number of nodes with available pods: 1 Jan 8 21:27:00.449: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:27:01.110: INFO: Number of nodes with available pods: 1 Jan 8 21:27:01.110: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:27:02.109: INFO: Number of nodes with available pods: 1 Jan 8 21:27:02.109: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 8 21:27:03.112: INFO: Number of nodes with available pods: 2 Jan 8 21:27:03.112: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
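The "set a daemon pod's phase to 'Failed'" step is the crux of this test: pod phase is normally kubelet-owned, so the suite drives it directly through the status subresource and then waits for the DaemonSet controller to delete the failed pod and schedule a replacement. Roughly, with assumed names and the context-free v1.17 client-go signatures:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markFailed flips one daemon pod to Failed so the DaemonSet controller
// has to revive it, which is what the availability counts above poll for.
func markFailed(cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Phase = corev1.PodFailed // kubelet-owned in normal operation; the test sets it directly
	_, err = cs.CoreV1().Pods(ns).UpdateStatus(pod)
	return err
}
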
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5885, will wait for the garbage collector to delete the pods Jan 8 21:27:03.188: INFO: Deleting DaemonSet.extensions daemon-set took: 16.393498ms Jan 8 21:27:03.489: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.497684ms Jan 8 21:27:13.204: INFO: Number of nodes with available pods: 0 Jan 8 21:27:13.204: INFO: Number of running nodes: 0, number of available pods: 0 Jan 8 21:27:13.218: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5885/daemonsets","resourceVersion":"886412"},"items":null} Jan 8 21:27:13.222: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5885/pods","resourceVersion":"886412"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:27:13.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5885" for this suite. • [SLOW TEST:28.436 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":51,"skipped":919,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:27:13.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 8 21:27:21.929: INFO: Successfully updated pod "labelsupdate0c01b109-79da-4dea-b1b2-215d96ea89ac" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:27:23.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5393" for this suite. 
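"Successfully updated pod" above refers to a label change on a running pod: the pod mounts its own metadata.labels through a downwardAPI volume, and the kubelet must rewrite the projected file after the update, which the test confirms by reading the file back. The volume shape, sketched against the v1.17-era Go API (volume name and file path assumed):

package main

import corev1 "k8s.io/api/core/v1"

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // assumed name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels", // the file the container reads to observe updates
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	_ = vol // unlike env vars, downwardAPI volume files are refreshed after metadata changes
}
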
• [SLOW TEST:10.737 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":922,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:27:23.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 8 21:27:24.986: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 8 21:27:27.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:27:29.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:27:31.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:27:33.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115645, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 8 21:27:36.048: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:27:36.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5243" for this suite. STEP: Destroying namespace "webhook-5243-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.395 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":53,"skipped":935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:27:36.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-8e493076-ac7f-422d-bc52-16e93ed64449 STEP: Creating a pod to test consume secrets Jan 8 21:27:36.446: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a" in namespace "projected-2097" to be "success or failure" Jan 8 21:27:36.449: INFO: Pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.474524ms Jan 8 21:27:38.464: INFO: Pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018350528s Jan 8 21:27:40.478: INFO: Pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032381639s Jan 8 21:27:42.765: INFO: Pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319446442s Jan 8 21:27:44.773: INFO: Pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.327543642s Jan 8 21:27:46.779: INFO: Pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.333065022s STEP: Saw pod success Jan 8 21:27:46.779: INFO: Pod "pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a" satisfied condition "success or failure" Jan 8 21:27:46.781: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a container projected-secret-volume-test: STEP: delete the pod Jan 8 21:27:46.900: INFO: Waiting for pod pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a to disappear Jan 8 21:27:46.906: INFO: Pod pod-projected-secrets-0f8f8aaf-4313-4bd7-8588-dc4f93866d2a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:27:46.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2097" for this suite. • [SLOW TEST:10.532 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:27:46.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-5ac6e66c-c1ab-4cdb-8ffe-476f8857f41b STEP: Creating a pod to test consume configMaps Jan 8 21:27:46.987: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2" in namespace "projected-8436" to be "success or failure" Jan 8 21:27:47.037: INFO: Pod "pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 49.826093ms Jan 8 21:27:49.044: INFO: Pod "pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056919834s Jan 8 21:27:51.052: INFO: Pod "pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065225827s Jan 8 21:27:53.058: INFO: Pod "pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071024155s Jan 8 21:27:55.064: INFO: Pod "pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.077141106s STEP: Saw pod success Jan 8 21:27:55.064: INFO: Pod "pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2" satisfied condition "success or failure" Jan 8 21:27:55.068: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2 container projected-configmap-volume-test: STEP: delete the pod Jan 8 21:27:55.140: INFO: Waiting for pod pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2 to disappear Jan 8 21:27:55.151: INFO: Pod pod-projected-configmaps-051d5148-5c2f-4d11-b0e4-ed34bc9f86d2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:27:55.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8436" for this suite. • [SLOW TEST:8.281 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1009,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:27:55.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-581, will wait for the garbage collector to delete the pods Jan 8 21:28:03.365: INFO: Deleting Job.batch foo took: 11.820234ms Jan 8 21:28:03.665: INFO: Terminating Job.batch foo pods took: 300.502092ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:28:42.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-581" for this suite. 
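"Will wait for the garbage collector to delete the pods" is the essence of this Job test: the Job is deleted without orphaning, so ownerReference-based garbage collection must remove its pods, and the suite then polls until both the Job and its pods are gone. Roughly, with the context-free v1.17 client-go signatures and assumed names:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndLetGCCollect deletes a Job and leaves its pods to the
// garbage collector, which removes them via their ownerReferences.
func deleteJobAndLetGCCollect(cs kubernetes.Interface, ns, name string) error {
	// The suite then polls the Job and Pod lists until both are empty.
	policy := metav1.DeletePropagationBackground // GC removes dependents asynchronously
	return cs.BatchV1().Jobs(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
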
• [SLOW TEST:47.333 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":56,"skipped":1020,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:28:42.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-491 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 8 21:28:42.682: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 8 21:29:12.961: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-491 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:29:12.961: INFO: >>> kubeConfig: /root/.kube/config I0108 21:29:13.031709 9 log.go:172] (0xc002932630) (0xc0018a3360) Create stream I0108 21:29:13.031806 9 log.go:172] (0xc002932630) (0xc0018a3360) Stream added, broadcasting: 1 I0108 21:29:13.036246 9 log.go:172] (0xc002932630) Reply frame received for 1 I0108 21:29:13.036293 9 log.go:172] (0xc002932630) (0xc001c5c320) Create stream I0108 21:29:13.036305 9 log.go:172] (0xc002932630) (0xc001c5c320) Stream added, broadcasting: 3 I0108 21:29:13.038179 9 log.go:172] (0xc002932630) Reply frame received for 3 I0108 21:29:13.038201 9 log.go:172] (0xc002932630) (0xc001c5c500) Create stream I0108 21:29:13.038211 9 log.go:172] (0xc002932630) (0xc001c5c500) Stream added, broadcasting: 5 I0108 21:29:13.040279 9 log.go:172] (0xc002932630) Reply frame received for 5 I0108 21:29:13.131815 9 log.go:172] (0xc002932630) Data frame received for 3 I0108 21:29:13.131911 9 log.go:172] (0xc001c5c320) (3) Data frame handling I0108 21:29:13.131939 9 log.go:172] (0xc001c5c320) (3) Data frame sent I0108 21:29:13.207154 9 log.go:172] (0xc002932630) (0xc001c5c320) Stream removed, broadcasting: 3 I0108 21:29:13.207296 9 log.go:172] (0xc002932630) Data frame received for 1 I0108 21:29:13.207340 9 log.go:172] (0xc0018a3360) (1) Data frame handling I0108 21:29:13.207374 9 log.go:172] (0xc002932630) (0xc001c5c500) Stream removed, broadcasting: 5 I0108 21:29:13.207414 9 log.go:172] (0xc0018a3360) (1) Data frame sent I0108 21:29:13.207454 9 log.go:172] (0xc002932630) (0xc0018a3360) Stream removed, broadcasting: 1 I0108 21:29:13.207523 9 log.go:172] (0xc002932630) Go away received I0108 21:29:13.207758 9 log.go:172] (0xc002932630) (0xc0018a3360) Stream removed, 
broadcasting: 1 I0108 21:29:13.207777 9 log.go:172] (0xc002932630) (0xc001c5c320) Stream removed, broadcasting: 3 I0108 21:29:13.207791 9 log.go:172] (0xc002932630) (0xc001c5c500) Stream removed, broadcasting: 5 Jan 8 21:29:13.207: INFO: Found all expected endpoints: [netserver-0] Jan 8 21:29:13.216: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-491 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 8 21:29:13.216: INFO: >>> kubeConfig: /root/.kube/config I0108 21:29:13.263986 9 log.go:172] (0xc002ad5ad0) (0xc0017ada40) Create stream I0108 21:29:13.264191 9 log.go:172] (0xc002ad5ad0) (0xc0017ada40) Stream added, broadcasting: 1 I0108 21:29:13.271339 9 log.go:172] (0xc002ad5ad0) Reply frame received for 1 I0108 21:29:13.271443 9 log.go:172] (0xc002ad5ad0) (0xc001c5c640) Create stream I0108 21:29:13.271457 9 log.go:172] (0xc002ad5ad0) (0xc001c5c640) Stream added, broadcasting: 3 I0108 21:29:13.272906 9 log.go:172] (0xc002ad5ad0) Reply frame received for 3 I0108 21:29:13.272934 9 log.go:172] (0xc002ad5ad0) (0xc0014c7ae0) Create stream I0108 21:29:13.272945 9 log.go:172] (0xc002ad5ad0) (0xc0014c7ae0) Stream added, broadcasting: 5 I0108 21:29:13.275154 9 log.go:172] (0xc002ad5ad0) Reply frame received for 5 I0108 21:29:13.348815 9 log.go:172] (0xc002ad5ad0) Data frame received for 3 I0108 21:29:13.348921 9 log.go:172] (0xc001c5c640) (3) Data frame handling I0108 21:29:13.348958 9 log.go:172] (0xc001c5c640) (3) Data frame sent I0108 21:29:13.412079 9 log.go:172] (0xc002ad5ad0) Data frame received for 1 I0108 21:29:13.412167 9 log.go:172] (0xc002ad5ad0) (0xc001c5c640) Stream removed, broadcasting: 3 I0108 21:29:13.412195 9 log.go:172] (0xc0017ada40) (1) Data frame handling I0108 21:29:13.412212 9 log.go:172] (0xc0017ada40) (1) Data frame sent I0108 21:29:13.412224 9 log.go:172] (0xc002ad5ad0) (0xc0017ada40) Stream removed, broadcasting: 1 I0108 21:29:13.412446 9 log.go:172] (0xc002ad5ad0) (0xc0014c7ae0) Stream removed, broadcasting: 5 I0108 21:29:13.412467 9 log.go:172] (0xc002ad5ad0) Go away received I0108 21:29:13.413173 9 log.go:172] (0xc002ad5ad0) (0xc0017ada40) Stream removed, broadcasting: 1 I0108 21:29:13.413273 9 log.go:172] (0xc002ad5ad0) (0xc001c5c640) Stream removed, broadcasting: 3 I0108 21:29:13.413299 9 log.go:172] (0xc002ad5ad0) (0xc0014c7ae0) Stream removed, broadcasting: 5 Jan 8 21:29:13.413: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:29:13.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-491" for this suite. 
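Each of the two exec blocks above is the same check against a different netserver pod: from a host-network test pod, fetch http://<podIP>:8080/hostName and match the reply against the expected endpoint name (netserver-0, then netserver-1). Stripped of the exec and stream plumbing, the check reduces to something like this sketch (not the framework's literal code):

package sketch

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

// hostNameOf asks a netserver pod for its hostname the same way the
// curl in the log does; the caller compares it to the expected pod name.
func hostNameOf(podIP string) (string, error) {
	resp, err := http.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}
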
• [SLOW TEST:30.889 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1022,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:29:13.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 8 21:29:13.519: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Jan 8 21:29:13.943: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 8 21:29:16.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:29:18.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115754, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:29:20.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:29:22.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:29:24.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:29:28.800: INFO: Waited 2.603838111s for the sample-apiserver to be ready to handle requests. 
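Behind "Registering the sample API server" is an APIService object that tells the aggregation layer to proxy one group/version to an in-cluster service; once the backing deployment above reports ready, kube-apiserver can answer for that group. A sketch of the registration, where the group, service name, priorities, and CA bundle are all assumptions:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	svc := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"}, // must be "<version>.<group>"
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.k8s.io", // assumed; the sample API server's group
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-6650", // namespace from the log above
				Name:      "sample-api",      // assumed service name
			},
			CABundle:             []byte("<per-run CA bundle>"), // placeholder, not a real certificate
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	_ = svc // the apiserver then proxies /apis/wardle.k8s.io/v1alpha1 to the service
}
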
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:29:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6650" for this suite. • [SLOW TEST:15.861 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":58,"skipped":1024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:29:29.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:29:29.376: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 8 21:29:34.394: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 8 21:29:38.423: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 8 21:29:38.455: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7214 /apis/apps/v1/namespaces/deployment-7214/deployments/test-cleanup-deployment 925cd05b-55f2-418e-8b3a-4b6096715e41 887122 1 2020-01-08 21:29:38 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a2cbd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 8 21:29:38.506: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7214 /apis/apps/v1/namespaces/deployment-7214/replicasets/test-cleanup-deployment-55ffc6b7b6 57abf4d9-6a3d-4226-8388-7d9c9c6ab7e0 887124 1 2020-01-08 21:29:38 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 925cd05b-55f2-418e-8b3a-4b6096715e41 0xc001a2cfd7 0xc001a2cfd8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001a2d048 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 8 21:29:38.506: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 8 21:29:38.506: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7214 /apis/apps/v1/namespaces/deployment-7214/replicasets/test-cleanup-controller 96740bba-7dd4-4e8b-895d-2eb64cd031a0 887123 1 2020-01-08 21:29:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 925cd05b-55f2-418e-8b3a-4b6096715e41 0xc001a2cf07 0xc001a2cf08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001a2cf68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 8 
21:29:38.536: INFO: Pod "test-cleanup-controller-m6n52" is available: &Pod{ObjectMeta:{test-cleanup-controller-m6n52 test-cleanup-controller- deployment-7214 /api/v1/namespaces/deployment-7214/pods/test-cleanup-controller-m6n52 6cee01f4-e8d8-4a54-a601-6b96c86f4304 887120 0 2020-01-08 21:29:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 96740bba-7dd4-4e8b-895d-2eb64cd031a0 0xc001a2d587 0xc001a2d588}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lfpbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lfpbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lfpbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:29:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:29:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:29:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-08 21:29:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-08 21:29:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:29:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6ebeed546e5541a5f5627fc8f8f0f5db1d6b97148e09199a15944499c750235e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 8 21:29:38.537: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-ktrjb" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-ktrjb test-cleanup-deployment-55ffc6b7b6- deployment-7214 /api/v1/namespaces/deployment-7214/pods/test-cleanup-deployment-55ffc6b7b6-ktrjb 27ddbf0b-4dee-4e7c-b269-9446a1d74787 887130 0 2020-01-08 21:29:38 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 57abf4d9-6a3d-4226-8388-7d9c9c6ab7e0 0xc001a2d707 0xc001a2d708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lfpbh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lfpbh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lfpbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{
Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:29:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:29:38.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7214" for this suite. • [SLOW TEST:9.410 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":59,"skipped":1055,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:29:38.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jan 8 21:29:38.779: INFO: Waiting up to 5m0s for pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536" in namespace "var-expansion-9430" to be "success or failure" Jan 8 21:29:38.789: INFO: Pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536": Phase="Pending", Reason="", readiness=false. Elapsed: 9.467744ms Jan 8 21:29:40.850: INFO: Pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071015029s Jan 8 21:29:42.867: INFO: Pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087920293s Jan 8 21:29:44.883: INFO: Pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103620648s Jan 8 21:29:46.888: INFO: Pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.109129029s Jan 8 21:29:48.901: INFO: Pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122028233s STEP: Saw pod success Jan 8 21:29:48.901: INFO: Pod "var-expansion-1ed0f85c-8022-4280-a58c-130182eff536" satisfied condition "success or failure" Jan 8 21:29:48.907: INFO: Trying to get logs from node jerma-node pod var-expansion-1ed0f85c-8022-4280-a58c-130182eff536 container dapi-container: STEP: delete the pod Jan 8 21:29:49.086: INFO: Waiting for pod var-expansion-1ed0f85c-8022-4280-a58c-130182eff536 to disappear Jan 8 21:29:49.094: INFO: Pod var-expansion-1ed0f85c-8022-4280-a58c-130182eff536 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:29:49.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9430" for this suite. • [SLOW TEST:10.407 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1066,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:29:49.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0108 21:30:29.547489 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
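The 30-second wait above is a negative assertion: the RC was deleted with the orphan propagation policy, so its pods must still exist when the wait ends; the garbage collector deleting them would be the bug. The deletion step, sketched with the context-free v1.17 client-go signatures and assumed names:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods removes only the ReplicationController; the GC
// strips the ownerReferences from its pods instead of deleting them.
func deleteRCOrphaningPods(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
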
Jan 8 21:30:29.547: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:30:29.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3896" for this suite. • [SLOW TEST:40.453 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":61,"skipped":1109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:30:29.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:30:37.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2730" for this suite. • [SLOW TEST:8.681 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":62,"skipped":1138,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:30:38.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 8 21:30:38.811: INFO: Waiting up to 5m0s for pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444" in namespace "downward-api-6467" to be "success or failure" Jan 8 21:30:38.902: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444": Phase="Pending", Reason="", readiness=false. Elapsed: 90.972849ms Jan 8 21:30:41.428: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.616132664s Jan 8 21:30:43.555: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74385079s Jan 8 21:30:45.561: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444": Phase="Pending", Reason="", readiness=false. Elapsed: 6.749095328s Jan 8 21:30:47.568: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75682633s Jan 8 21:30:49.575: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444": Phase="Pending", Reason="", readiness=false. Elapsed: 10.763238801s Jan 8 21:30:51.587: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.775509438s STEP: Saw pod success Jan 8 21:30:51.587: INFO: Pod "downward-api-c6c35796-3e1c-48cd-806d-99ded4201444" satisfied condition "success or failure" Jan 8 21:30:51.592: INFO: Trying to get logs from node jerma-node pod downward-api-c6c35796-3e1c-48cd-806d-99ded4201444 container dapi-container: STEP: delete the pod Jan 8 21:30:51.661: INFO: Waiting for pod downward-api-c6c35796-3e1c-48cd-806d-99ded4201444 to disappear Jan 8 21:30:51.669: INFO: Pod downward-api-c6c35796-3e1c-48cd-806d-99ded4201444 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:30:51.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6467" for this suite. 
• [SLOW TEST:13.441 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1155,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:30:51.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-337de6f9-593f-4cc6-b390-84bc95211b95 STEP: Creating a pod to test consume secrets Jan 8 21:30:51.877: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767" in namespace "projected-7627" to be "success or failure" Jan 8 21:30:51.892: INFO: Pod "pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767": Phase="Pending", Reason="", readiness=false. Elapsed: 14.537276ms Jan 8 21:30:53.900: INFO: Pod "pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022165243s Jan 8 21:30:55.906: INFO: Pod "pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02849287s Jan 8 21:30:57.913: INFO: Pod "pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035242472s Jan 8 21:30:59.921: INFO: Pod "pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04369073s STEP: Saw pod success Jan 8 21:30:59.921: INFO: Pod "pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767" satisfied condition "success or failure" Jan 8 21:30:59.926: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767 container projected-secret-volume-test: STEP: delete the pod Jan 8 21:31:00.036: INFO: Waiting for pod pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767 to disappear Jan 8 21:31:00.043: INFO: Pod pod-projected-secrets-2858a9aa-94d5-4b09-860a-265df24a5767 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:31:00.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7627" for this suite. 
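
The projected-secret test above exercises defaultMode on a projected volume: every file materialized from the secret gets those permissions, which the test container then lists and verifies. A sketch under assumed names and mode (0400 here is illustrative; the log does not show the value used):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        mode := int32(0400) // applied to every file projected into the volume
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox:1.31",
                    Command: []string{"sh", "-c", "ls -l /etc/projected"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-vol",
                        MountPath: "/etc/projected",
                        ReadOnly:  true,
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "secret-vol",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &mode,
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    // assumes a secret of this name exists
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        b, err := json.MarshalIndent(&pod, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }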
• [SLOW TEST:8.368 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1175,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:31:00.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:31:00.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3173' Jan 8 21:31:00.606: INFO: stderr: "" Jan 8 21:31:00.606: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jan 8 21:31:00.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3173' Jan 8 21:31:01.104: INFO: stderr: "" Jan 8 21:31:01.104: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 8 21:31:02.120: INFO: Selector matched 1 pods for map[app:agnhost] Jan 8 21:31:02.120: INFO: Found 0 / 1 Jan 8 21:31:03.121: INFO: Selector matched 1 pods for map[app:agnhost] Jan 8 21:31:03.121: INFO: Found 0 / 1 Jan 8 21:31:04.113: INFO: Selector matched 1 pods for map[app:agnhost] Jan 8 21:31:04.113: INFO: Found 0 / 1 Jan 8 21:31:05.119: INFO: Selector matched 1 pods for map[app:agnhost] Jan 8 21:31:05.120: INFO: Found 0 / 1 Jan 8 21:31:06.114: INFO: Selector matched 1 pods for map[app:agnhost] Jan 8 21:31:06.114: INFO: Found 0 / 1 Jan 8 21:31:07.148: INFO: Selector matched 1 pods for map[app:agnhost] Jan 8 21:31:07.148: INFO: Found 1 / 1 Jan 8 21:31:07.148: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 8 21:31:07.152: INFO: Selector matched 1 pods for map[app:agnhost] Jan 8 21:31:07.152: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 8 21:31:07.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-vkn7q --namespace=kubectl-3173' Jan 8 21:31:07.285: INFO: stderr: "" Jan 8 21:31:07.285: INFO: stdout: "Name: agnhost-master-vkn7q\nNamespace: kubectl-3173\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Wed, 08 Jan 2020 21:31:00 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://179821af82b509529d039ef2519b508b72ac29fde0f0862b993ed84e33e0ce3e\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 08 Jan 2020 21:31:05 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-xwhvx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-xwhvx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-xwhvx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-3173/agnhost-master-vkn7q to jerma-node\n Normal Pulled 4s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-node Created container agnhost-master\n Normal Started 2s kubelet, jerma-node Started container agnhost-master\n" Jan 8 21:31:07.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3173' Jan 8 21:31:07.446: INFO: stderr: "" Jan 8 21:31:07.446: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3173\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-vkn7q\n" Jan 8 21:31:07.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3173' Jan 8 21:31:07.585: INFO: stderr: "" Jan 8 21:31:07.585: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3173\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.141.21\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 8 21:31:07.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Jan 8 21:31:07.712: INFO: stderr: "" Jan 8 21:31:07.712: INFO: stdout: "Name: jerma-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: \n RenewTime: Wed, 08 Jan 2020 21:31:02 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Wed, 08 Jan 2020 21:28:59 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 08 Jan 2020 21:28:59 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 08 Jan 2020 21:28:59 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 08 Jan 2020 21:28:59 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kubectl-3173 agnhost-master-vkn7q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 8 21:31:07.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3173' Jan 8 21:31:07.808: INFO: stderr: "" Jan 8 21:31:07.808: INFO: stdout: "Name: kubectl-3173\nLabels: e2e-framework=kubectl\n e2e-run=b1feba36-0fc8-4ca2-a7ec-3dd412c5917a\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:31:07.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3173" for this suite. 
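
As the Running '...' lines above show, this test drives everything by shelling out to kubectl and grepping the describe output. A minimal sketch of that pattern in Go; it reuses the namespace and object names from this particular run, which are ephemeral and will not exist on another cluster.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirror the test's invocations: describe the pod, rc, and service,
        // then inspect the combined output. Kubeconfig path is a placeholder.
        for _, args := range [][]string{
            {"describe", "pod", "agnhost-master-vkn7q"},
            {"describe", "rc", "agnhost-master"},
            {"describe", "service", "agnhost-master"},
        } {
            full := append([]string{"--kubeconfig=/root/.kube/config", "--namespace=kubectl-3173"}, args...)
            out, err := exec.Command("kubectl", full...).CombinedOutput()
            if err != nil {
                fmt.Printf("%v failed: %v\n", args, err)
                continue
            }
            fmt.Printf("%s\n", out)
        }
    }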
• [SLOW TEST:7.762 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":65,"skipped":1184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:31:07.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-7874 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7874 STEP: Deleting pre-stop pod Jan 8 21:31:27.038: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:31:27.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7874" for this suite. 
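
The PreStop result above ("Received": {"prestop": 1}) shows the tester pod's preStop hook firing exactly once before deletion completed. The general shape of a pod with such a hook, sketched against the v1.17-era core/v1 types this suite uses; in the actual test the hook is an HTTP call back to the server pod, and the names, image, and command below are illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The kubelet runs the preStop handler before sending SIGTERM to the
        // container. Note: in the v1.17-era k8s.io/api the handler type is
        // corev1.Handler; later releases renamed it to LifecycleHandler.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "tester",
                    Image:   "busybox:1.31",
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                // stand-in for the test's HTTP callback
                                Command: []string{"sh", "-c", "wget -q -O- http://server:8080/prestop"},
                            },
                        },
                    },
                }},
            },
        }
        b, err := json.MarshalIndent(&pod, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }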
• [SLOW TEST:19.271 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":66,"skipped":1225,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:31:27.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 8 21:31:27.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8429' Jan 8 21:31:27.413: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 8 21:31:27.413: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 Jan 8 21:31:29.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8429' Jan 8 21:31:29.907: INFO: stderr: "" Jan 8 21:31:29.907: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:31:29.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8429" for this suite. 
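
The stderr above records that kubectl run's deployment/apps.v1 generator was already deprecated at this point. What it produced is approximately the apps/v1 Deployment below; the "run" label convention and single-replica default are assumptions about the generator's behavior, not copied from the log.

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Roughly what `kubectl run e2e-test-httpd-deployment --image=...`
        // generated: a one-replica Deployment whose selector matches a label
        // derived from the name.
        labels := map[string]string{"run": "e2e-test-httpd-deployment"}
        replicas := int32(1)
        dep := appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment", Labels: labels},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-httpd-deployment",
                            Image: "docker.io/library/httpd:2.4.38-alpine",
                        }},
                    },
                },
            },
        }
        b, err := json.MarshalIndent(&dep, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }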
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":67,"skipped":1238,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:31:29.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 8 21:31:30.092: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 8 21:31:42.526: INFO: >>> kubeConfig: /root/.kube/config Jan 8 21:31:45.574: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:31:57.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8153" for this suite. • [SLOW TEST:28.026 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":68,"skipped":1249,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:31:57.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 8 21:32:12.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 8 21:32:12.438: INFO: Pod pod-with-prestop-http-hook still exists Jan 8 21:32:14.438: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 8 21:32:14.444: INFO: Pod pod-with-prestop-http-hook still exists Jan 8 21:32:16.439: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 8 21:32:16.457: INFO: Pod pod-with-prestop-http-hook still exists Jan 8 21:32:18.438: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 8 21:32:18.448: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:32:18.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4738" for this suite. • [SLOW TEST:20.500 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1257,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:32:18.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 8 21:32:18.720: INFO: Waiting up to 5m0s for pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6" in namespace "emptydir-388" to be "success or failure" Jan 8 21:32:18.730: INFO: Pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.768088ms Jan 8 21:32:20.735: INFO: Pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014492216s Jan 8 21:32:22.744: INFO: Pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024379168s Jan 8 21:32:24.751: INFO: Pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.030702998s Jan 8 21:32:26.767: INFO: Pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047345141s Jan 8 21:32:28.772: INFO: Pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051931803s STEP: Saw pod success Jan 8 21:32:28.772: INFO: Pod "pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6" satisfied condition "success or failure" Jan 8 21:32:28.774: INFO: Trying to get logs from node jerma-node pod pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6 container test-container: STEP: delete the pod Jan 8 21:32:28.874: INFO: Waiting for pod pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6 to disappear Jan 8 21:32:28.882: INFO: Pod pod-d5ceb35a-98a2-4152-ba35-182829f8d0a6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:32:28.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-388" for this suite. • [SLOW TEST:10.407 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:32:28.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 8 21:32:29.495: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 8 21:32:31.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:32:33.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 8 21:32:35.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714115949, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 8 21:32:38.584: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API Jan 8 21:32:40.763: INFO: Waiting for webhook configuration to be ready... STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:32:51.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4260" for this suite. STEP: Destroying namespace "webhook-4260-markers" for this suite. 
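
The knob this webhook test exercises is webhooks[].timeoutSeconds: with a 1s timeout and a webhook that sleeps 5s, the API request fails under failurePolicy=Fail but succeeds under failurePolicy=Ignore, and an empty timeout defaults to 10s in v1, exactly the four cases stepped through above. A sketch of such a configuration using the admissionregistration/v1 Go types; the webhook name, service path, and rule are illustrative, not read back from the log.

    package main

    import (
        "encoding/json"
        "fmt"

        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        timeout := int32(1)                             // shorter than the webhook's 5s latency
        ignore := admissionv1.Ignore                    // request proceeds if the webhook times out
        sideEffects := admissionv1.SideEffectClassNone
        path := "/always-allow-delay-5s"                // assumed handler path
        cfg := admissionv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook-demo"},
            Webhooks: []admissionv1.ValidatingWebhook{{
                Name:                    "slow.example.com",
                TimeoutSeconds:          &timeout,
                FailurePolicy:           &ignore,
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1"},
                ClientConfig: admissionv1.WebhookClientConfig{
                    Service: &admissionv1.ServiceReference{
                        Namespace: "webhook-4260",
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                },
                Rules: []admissionv1.RuleWithOperations{{
                    Operations: []admissionv1.OperationType{admissionv1.Create},
                    Rule: admissionv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
            }},
        }
        b, err := json.MarshalIndent(&cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }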
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.280 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":71,"skipped":1290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:32:51.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 8 21:32:51.297: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a" in namespace "projected-6183" to be "success or failure" Jan 8 21:32:51.328: INFO: Pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.573113ms Jan 8 21:32:53.333: INFO: Pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0361588s Jan 8 21:32:55.341: INFO: Pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043693499s Jan 8 21:32:57.351: INFO: Pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05371892s Jan 8 21:32:59.359: INFO: Pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061931158s Jan 8 21:33:01.369: INFO: Pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.071930483s STEP: Saw pod success Jan 8 21:33:01.369: INFO: Pod "downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a" satisfied condition "success or failure" Jan 8 21:33:01.381: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a container client-container: STEP: delete the pod Jan 8 21:33:01.521: INFO: Waiting for pod downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a to disappear Jan 8 21:33:01.528: INFO: Pod downwardapi-volume-32aa5ac0-7c23-47f3-abfc-52776febca7a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:33:01.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6183" for this suite. • [SLOW TEST:10.367 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1342,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:33:01.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:33:12.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8243" for this suite. • [SLOW TEST:11.249 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":73,"skipped":1342,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:33:12.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 8 21:33:12.929: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f8680daf-ee30-4a23-a312-b815bc7b075d" in namespace "security-context-test-7617" to be "success or failure" Jan 8 21:33:12.969: INFO: Pod "busybox-readonly-false-f8680daf-ee30-4a23-a312-b815bc7b075d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.631846ms Jan 8 21:33:14.976: INFO: Pod "busybox-readonly-false-f8680daf-ee30-4a23-a312-b815bc7b075d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046637527s Jan 8 21:33:16.985: INFO: Pod "busybox-readonly-false-f8680daf-ee30-4a23-a312-b815bc7b075d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055319147s Jan 8 21:33:18.992: INFO: Pod "busybox-readonly-false-f8680daf-ee30-4a23-a312-b815bc7b075d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062638543s Jan 8 21:33:21.000: INFO: Pod "busybox-readonly-false-f8680daf-ee30-4a23-a312-b815bc7b075d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070891391s Jan 8 21:33:21.000: INFO: Pod "busybox-readonly-false-f8680daf-ee30-4a23-a312-b815bc7b075d" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:33:21.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7617" for this suite. 
• [SLOW TEST:8.222 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1350,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:33:21.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jan 8 21:33:21.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 8 21:33:21.327: INFO: stderr: "" Jan 8 21:33:21.327: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:33:21.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4922" for this suite. 
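
The cluster-info stdout above embeds ANSI color escapes (\x1b[0;32m and similar) because kubectl colorizes its output; the plain text underneath is "Kubernetes master is running at https://172.24.4.193:6443". A small sketch that reproduces the check and strips the escapes first; it assumes kubectl is on PATH with a default kubeconfig.

    package main

    import (
        "fmt"
        "os/exec"
        "regexp"
    )

    // ansi matches SGR color sequences like \x1b[0;32m seen in the captured
    // stdout; removing them leaves the plain text the test greps for.
    var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

    func main() {
        out, err := exec.Command("kubectl", "cluster-info").CombinedOutput()
        if err != nil {
            fmt.Println("kubectl cluster-info failed:", err)
            return
        }
        fmt.Print(ansi.ReplaceAllString(string(out), ""))
    }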
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":75,"skipped":1357,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:33:21.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-437a6f9a-dba2-4efd-820f-9166f2554396 STEP: Creating a pod to test consume configMaps Jan 8 21:33:21.468: INFO: Waiting up to 5m0s for pod "pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e" in namespace "configmap-5010" to be "success or failure" Jan 8 21:33:21.511: INFO: Pod "pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.573822ms Jan 8 21:33:23.518: INFO: Pod "pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049553964s Jan 8 21:33:25.525: INFO: Pod "pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056672532s Jan 8 21:33:27.536: INFO: Pod "pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067329712s Jan 8 21:33:29.547: INFO: Pod "pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078416238s STEP: Saw pod success Jan 8 21:33:29.547: INFO: Pod "pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e" satisfied condition "success or failure" Jan 8 21:33:29.554: INFO: Trying to get logs from node jerma-node pod pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e container configmap-volume-test: STEP: delete the pod Jan 8 21:33:29.690: INFO: Waiting for pod pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e to disappear Jan 8 21:33:29.696: INFO: Pod pod-configmaps-900eab68-ebab-4b2f-bf23-a9527ed2a44e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:33:29.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5010" for this suite. 
• [SLOW TEST:8.386 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1376,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:33:29.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4409 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4409 STEP: Creating statefulset with conflicting port in namespace statefulset-4409 STEP: Waiting until pod test-pod will start running in namespace statefulset-4409 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4409 Jan 8 21:33:37.966: INFO: Observed stateful pod in namespace: statefulset-4409, name: ss-0, uid: 54c95b46-7df1-4588-9e2a-5ead5b78b913, status phase: Pending. Waiting for statefulset controller to delete. Jan 8 21:33:38.511: INFO: Observed stateful pod in namespace: statefulset-4409, name: ss-0, uid: 54c95b46-7df1-4588-9e2a-5ead5b78b913, status phase: Failed. Waiting for statefulset controller to delete. Jan 8 21:33:38.541: INFO: Observed stateful pod in namespace: statefulset-4409, name: ss-0, uid: 54c95b46-7df1-4588-9e2a-5ead5b78b913, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 8 21:33:38.567: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4409 STEP: Removing pod with conflicting port in namespace statefulset-4409 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4409 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 8 21:33:46.770: INFO: Deleting all statefulset in ns statefulset-4409 Jan 8 21:33:46.775: INFO: Scaling statefulset ss to 0 Jan 8 21:34:06.800: INFO: Waiting for statefulset status.replicas updated to 0 Jan 8 21:34:06.808: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:34:06.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4409" for this suite. • [SLOW TEST:37.162 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":77,"skipped":1394,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:34:06.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jan 8 21:34:06.994: INFO: Waiting up to 5m0s for pod "var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65" in namespace "var-expansion-6880" to be "success or failure" Jan 8 21:34:07.022: INFO: Pod "var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65": Phase="Pending", Reason="", readiness=false. Elapsed: 27.535419ms Jan 8 21:34:09.032: INFO: Pod "var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037792919s Jan 8 21:34:11.038: INFO: Pod "var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043095991s Jan 8 21:34:13.045: INFO: Pod "var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050526471s Jan 8 21:34:15.051: INFO: Pod "var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.056850982s STEP: Saw pod success Jan 8 21:34:15.052: INFO: Pod "var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65" satisfied condition "success or failure" Jan 8 21:34:15.056: INFO: Trying to get logs from node jerma-node pod var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65 container dapi-container: STEP: delete the pod Jan 8 21:34:15.136: INFO: Waiting for pod var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65 to disappear Jan 8 21:34:15.245: INFO: Pod var-expansion-9b45b92b-2235-41b5-8a24-4c1f5f2a8c65 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:34:15.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6880" for this suite. • [SLOW TEST:8.367 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1395,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:34:15.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-354 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-354 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-354 Jan 8 21:34:15.333: INFO: Found 0 stateful pods, waiting for 1 Jan 8 21:34:25.341: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 8 21:34:25.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 8 21:34:28.030: INFO: stderr: "I0108 21:34:27.715787 888 log.go:172] (0xc000784b00) (0xc0009d41e0) Create stream\nI0108 21:34:27.715998 888 log.go:172] (0xc000784b00) (0xc0009d41e0) Stream added, broadcasting: 1\nI0108 21:34:27.727326 888 log.go:172] (0xc000784b00) Reply frame received for 
1\nI0108 21:34:27.727426 888 log.go:172] (0xc000784b00) (0xc000453360) Create stream\nI0108 21:34:27.727460 888 log.go:172] (0xc000784b00) (0xc000453360) Stream added, broadcasting: 3\nI0108 21:34:27.729314 888 log.go:172] (0xc000784b00) Reply frame received for 3\nI0108 21:34:27.729349 888 log.go:172] (0xc000784b00) (0xc000453400) Create stream\nI0108 21:34:27.729364 888 log.go:172] (0xc000784b00) (0xc000453400) Stream added, broadcasting: 5\nI0108 21:34:27.730996 888 log.go:172] (0xc000784b00) Reply frame received for 5\nI0108 21:34:27.853063 888 log.go:172] (0xc000784b00) Data frame received for 5\nI0108 21:34:27.853159 888 log.go:172] (0xc000453400) (5) Data frame handling\nI0108 21:34:27.853197 888 log.go:172] (0xc000453400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:34:27.922177 888 log.go:172] (0xc000784b00) Data frame received for 3\nI0108 21:34:27.922355 888 log.go:172] (0xc000453360) (3) Data frame handling\nI0108 21:34:27.922400 888 log.go:172] (0xc000453360) (3) Data frame sent\nI0108 21:34:28.016103 888 log.go:172] (0xc000784b00) Data frame received for 1\nI0108 21:34:28.016268 888 log.go:172] (0xc0009d41e0) (1) Data frame handling\nI0108 21:34:28.016298 888 log.go:172] (0xc0009d41e0) (1) Data frame sent\nI0108 21:34:28.017630 888 log.go:172] (0xc000784b00) (0xc0009d41e0) Stream removed, broadcasting: 1\nI0108 21:34:28.018942 888 log.go:172] (0xc000784b00) (0xc000453360) Stream removed, broadcasting: 3\nI0108 21:34:28.019184 888 log.go:172] (0xc000784b00) (0xc000453400) Stream removed, broadcasting: 5\nI0108 21:34:28.019360 888 log.go:172] (0xc000784b00) (0xc0009d41e0) Stream removed, broadcasting: 1\nI0108 21:34:28.019396 888 log.go:172] (0xc000784b00) (0xc000453360) Stream removed, broadcasting: 3\nI0108 21:34:28.019416 888 log.go:172] (0xc000784b00) (0xc000453400) Stream removed, broadcasting: 5\nI0108 21:34:28.019641 888 log.go:172] (0xc000784b00) Go away received\n" Jan 8 21:34:28.030: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 8 21:34:28.030: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 8 21:34:28.036: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 8 21:34:38.044: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 8 21:34:38.044: INFO: Waiting for statefulset status.replicas updated to 0 Jan 8 21:34:38.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999565s Jan 8 21:34:39.076: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989450891s Jan 8 21:34:40.083: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.9793582s Jan 8 21:34:41.089: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.973097953s Jan 8 21:34:42.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.966837087s Jan 8 21:34:43.104: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.958216795s Jan 8 21:34:44.110: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.951628678s Jan 8 21:34:45.118: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.946149733s Jan 8 21:34:46.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.937272925s Jan 8 21:34:47.134: INFO: Verifying statefulset ss doesn't scale past 1 for another 929.447947ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all 
of them will be running in namespace statefulset-354 Jan 8 21:34:48.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 8 21:34:48.623: INFO: stderr: "I0108 21:34:48.387652 918 log.go:172] (0xc000c124d0) (0xc000b181e0) Create stream\nI0108 21:34:48.387942 918 log.go:172] (0xc000c124d0) (0xc000b181e0) Stream added, broadcasting: 1\nI0108 21:34:48.391336 918 log.go:172] (0xc000c124d0) Reply frame received for 1\nI0108 21:34:48.391399 918 log.go:172] (0xc000c124d0) (0xc000689cc0) Create stream\nI0108 21:34:48.391413 918 log.go:172] (0xc000c124d0) (0xc000689cc0) Stream added, broadcasting: 3\nI0108 21:34:48.393013 918 log.go:172] (0xc000c124d0) Reply frame received for 3\nI0108 21:34:48.393036 918 log.go:172] (0xc000c124d0) (0xc000b18280) Create stream\nI0108 21:34:48.393042 918 log.go:172] (0xc000c124d0) (0xc000b18280) Stream added, broadcasting: 5\nI0108 21:34:48.394143 918 log.go:172] (0xc000c124d0) Reply frame received for 5\nI0108 21:34:48.482566 918 log.go:172] (0xc000c124d0) Data frame received for 3\nI0108 21:34:48.482859 918 log.go:172] (0xc000689cc0) (3) Data frame handling\nI0108 21:34:48.482947 918 log.go:172] (0xc000689cc0) (3) Data frame sent\nI0108 21:34:48.482957 918 log.go:172] (0xc000c124d0) Data frame received for 5\nI0108 21:34:48.482981 918 log.go:172] (0xc000b18280) (5) Data frame handling\nI0108 21:34:48.482997 918 log.go:172] (0xc000b18280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:34:48.602497 918 log.go:172] (0xc000c124d0) Data frame received for 1\nI0108 21:34:48.602677 918 log.go:172] (0xc000c124d0) (0xc000689cc0) Stream removed, broadcasting: 3\nI0108 21:34:48.602771 918 log.go:172] (0xc000b181e0) (1) Data frame handling\nI0108 21:34:48.602811 918 log.go:172] (0xc000b181e0) (1) Data frame sent\nI0108 21:34:48.602831 918 log.go:172] (0xc000c124d0) (0xc000b181e0) Stream removed, broadcasting: 1\nI0108 21:34:48.602896 918 log.go:172] (0xc000c124d0) (0xc000b18280) Stream removed, broadcasting: 5\nI0108 21:34:48.603021 918 log.go:172] (0xc000c124d0) Go away received\nI0108 21:34:48.604874 918 log.go:172] (0xc000c124d0) (0xc000b181e0) Stream removed, broadcasting: 1\nI0108 21:34:48.604899 918 log.go:172] (0xc000c124d0) (0xc000689cc0) Stream removed, broadcasting: 3\nI0108 21:34:48.604908 918 log.go:172] (0xc000c124d0) (0xc000b18280) Stream removed, broadcasting: 5\n" Jan 8 21:34:48.623: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 8 21:34:48.623: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 8 21:34:48.631: INFO: Found 1 stateful pods, waiting for 3 Jan 8 21:34:58.648: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:34:58.649: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:34:58.649: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 8 21:35:08.640: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:35:08.640: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 8 21:35:08.640: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: 
Scale down will halt with unhealthy stateful pod Jan 8 21:35:08.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 8 21:35:09.080: INFO: stderr: "I0108 21:35:08.900478 939 log.go:172] (0xc000114bb0) (0xc000bd00a0) Create stream\nI0108 21:35:08.900749 939 log.go:172] (0xc000114bb0) (0xc000bd00a0) Stream added, broadcasting: 1\nI0108 21:35:08.906576 939 log.go:172] (0xc000114bb0) Reply frame received for 1\nI0108 21:35:08.906690 939 log.go:172] (0xc000114bb0) (0xc0005fdc20) Create stream\nI0108 21:35:08.906708 939 log.go:172] (0xc000114bb0) (0xc0005fdc20) Stream added, broadcasting: 3\nI0108 21:35:08.908610 939 log.go:172] (0xc000114bb0) Reply frame received for 3\nI0108 21:35:08.908646 939 log.go:172] (0xc000114bb0) (0xc0005fdcc0) Create stream\nI0108 21:35:08.908657 939 log.go:172] (0xc000114bb0) (0xc0005fdcc0) Stream added, broadcasting: 5\nI0108 21:35:08.909940 939 log.go:172] (0xc000114bb0) Reply frame received for 5\nI0108 21:35:08.986875 939 log.go:172] (0xc000114bb0) Data frame received for 5\nI0108 21:35:08.986941 939 log.go:172] (0xc0005fdcc0) (5) Data frame handling\nI0108 21:35:08.986973 939 log.go:172] (0xc0005fdcc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:35:08.988806 939 log.go:172] (0xc000114bb0) Data frame received for 3\nI0108 21:35:08.988853 939 log.go:172] (0xc0005fdc20) (3) Data frame handling\nI0108 21:35:08.988893 939 log.go:172] (0xc0005fdc20) (3) Data frame sent\nI0108 21:35:09.069475 939 log.go:172] (0xc000114bb0) Data frame received for 1\nI0108 21:35:09.069540 939 log.go:172] (0xc000bd00a0) (1) Data frame handling\nI0108 21:35:09.069555 939 log.go:172] (0xc000bd00a0) (1) Data frame sent\nI0108 21:35:09.069575 939 log.go:172] (0xc000114bb0) (0xc000bd00a0) Stream removed, broadcasting: 1\nI0108 21:35:09.070809 939 log.go:172] (0xc000114bb0) (0xc0005fdc20) Stream removed, broadcasting: 3\nI0108 21:35:09.071221 939 log.go:172] (0xc000114bb0) (0xc0005fdcc0) Stream removed, broadcasting: 5\nI0108 21:35:09.071378 939 log.go:172] (0xc000114bb0) (0xc000bd00a0) Stream removed, broadcasting: 1\nI0108 21:35:09.071396 939 log.go:172] (0xc000114bb0) (0xc0005fdc20) Stream removed, broadcasting: 3\nI0108 21:35:09.071421 939 log.go:172] (0xc000114bb0) (0xc0005fdcc0) Stream removed, broadcasting: 5\n" Jan 8 21:35:09.080: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 8 21:35:09.080: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 8 21:35:09.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 8 21:35:09.535: INFO: stderr: "I0108 21:35:09.260409 960 log.go:172] (0xc00001c160) (0xc00065ba40) Create stream\nI0108 21:35:09.260632 960 log.go:172] (0xc00001c160) (0xc00065ba40) Stream added, broadcasting: 1\nI0108 21:35:09.264151 960 log.go:172] (0xc00001c160) Reply frame received for 1\nI0108 21:35:09.264178 960 log.go:172] (0xc00001c160) (0xc0007d0000) Create stream\nI0108 21:35:09.264186 960 log.go:172] (0xc00001c160) (0xc0007d0000) Stream added, broadcasting: 3\nI0108 21:35:09.268132 960 log.go:172] (0xc00001c160) Reply frame received for 3\nI0108 21:35:09.268305 960 log.go:172] (0xc00001c160) (0xc000290000) Create stream\nI0108 
21:35:09.268321 960 log.go:172] (0xc00001c160) (0xc000290000) Stream added, broadcasting: 5\nI0108 21:35:09.269648 960 log.go:172] (0xc00001c160) Reply frame received for 5\nI0108 21:35:09.362080 960 log.go:172] (0xc00001c160) Data frame received for 5\nI0108 21:35:09.362253 960 log.go:172] (0xc000290000) (5) Data frame handling\nI0108 21:35:09.362302 960 log.go:172] (0xc000290000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:35:09.389615 960 log.go:172] (0xc00001c160) Data frame received for 3\nI0108 21:35:09.389664 960 log.go:172] (0xc0007d0000) (3) Data frame handling\nI0108 21:35:09.389693 960 log.go:172] (0xc0007d0000) (3) Data frame sent\nI0108 21:35:09.521262 960 log.go:172] (0xc00001c160) (0xc0007d0000) Stream removed, broadcasting: 3\nI0108 21:35:09.521473 960 log.go:172] (0xc00001c160) Data frame received for 1\nI0108 21:35:09.521518 960 log.go:172] (0xc00065ba40) (1) Data frame handling\nI0108 21:35:09.521582 960 log.go:172] (0xc00065ba40) (1) Data frame sent\nI0108 21:35:09.521717 960 log.go:172] (0xc00001c160) (0xc000290000) Stream removed, broadcasting: 5\nI0108 21:35:09.521803 960 log.go:172] (0xc00001c160) (0xc00065ba40) Stream removed, broadcasting: 1\nI0108 21:35:09.521843 960 log.go:172] (0xc00001c160) Go away received\nI0108 21:35:09.523596 960 log.go:172] (0xc00001c160) (0xc00065ba40) Stream removed, broadcasting: 1\nI0108 21:35:09.523625 960 log.go:172] (0xc00001c160) (0xc0007d0000) Stream removed, broadcasting: 3\nI0108 21:35:09.523643 960 log.go:172] (0xc00001c160) (0xc000290000) Stream removed, broadcasting: 5\n" Jan 8 21:35:09.535: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 8 21:35:09.535: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 8 21:35:09.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 8 21:35:10.001: INFO: stderr: "I0108 21:35:09.756034 982 log.go:172] (0xc0009866e0) (0xc0006d1e00) Create stream\nI0108 21:35:09.756307 982 log.go:172] (0xc0009866e0) (0xc0006d1e00) Stream added, broadcasting: 1\nI0108 21:35:09.760349 982 log.go:172] (0xc0009866e0) Reply frame received for 1\nI0108 21:35:09.760446 982 log.go:172] (0xc0009866e0) (0xc00091a000) Create stream\nI0108 21:35:09.760458 982 log.go:172] (0xc0009866e0) (0xc00091a000) Stream added, broadcasting: 3\nI0108 21:35:09.761848 982 log.go:172] (0xc0009866e0) Reply frame received for 3\nI0108 21:35:09.761871 982 log.go:172] (0xc0009866e0) (0xc0006d1ea0) Create stream\nI0108 21:35:09.761883 982 log.go:172] (0xc0009866e0) (0xc0006d1ea0) Stream added, broadcasting: 5\nI0108 21:35:09.763711 982 log.go:172] (0xc0009866e0) Reply frame received for 5\nI0108 21:35:09.833996 982 log.go:172] (0xc0009866e0) Data frame received for 5\nI0108 21:35:09.834082 982 log.go:172] (0xc0006d1ea0) (5) Data frame handling\nI0108 21:35:09.834128 982 log.go:172] (0xc0006d1ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:35:09.854657 982 log.go:172] (0xc0009866e0) Data frame received for 3\nI0108 21:35:09.854688 982 log.go:172] (0xc00091a000) (3) Data frame handling\nI0108 21:35:09.854711 982 log.go:172] (0xc00091a000) (3) Data frame sent\nI0108 21:35:09.973285 982 log.go:172] (0xc0009866e0) Data frame received for 1\nI0108 21:35:09.974256 982 log.go:172] (0xc0006d1e00) (1) Data frame 
handling\nI0108 21:35:09.974442 982 log.go:172] (0xc0006d1e00) (1) Data frame sent\nI0108 21:35:09.974516 982 log.go:172] (0xc0009866e0) (0xc0006d1ea0) Stream removed, broadcasting: 5\nI0108 21:35:09.974897 982 log.go:172] (0xc0009866e0) (0xc00091a000) Stream removed, broadcasting: 3\nI0108 21:35:09.975119 982 log.go:172] (0xc0009866e0) (0xc0006d1e00) Stream removed, broadcasting: 1\nI0108 21:35:09.976132 982 log.go:172] (0xc0009866e0) Go away received\nI0108 21:35:09.978275 982 log.go:172] (0xc0009866e0) (0xc0006d1e00) Stream removed, broadcasting: 1\nI0108 21:35:09.978334 982 log.go:172] (0xc0009866e0) (0xc00091a000) Stream removed, broadcasting: 3\nI0108 21:35:09.978345 982 log.go:172] (0xc0009866e0) (0xc0006d1ea0) Stream removed, broadcasting: 5\n" Jan 8 21:35:10.001: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 8 21:35:10.001: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 8 21:35:10.001: INFO: Waiting for statefulset status.replicas updated to 0 Jan 8 21:35:10.007: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 8 21:35:20.016: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 8 21:35:20.016: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 8 21:35:20.016: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 8 21:35:20.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999488s Jan 8 21:35:21.035: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994335833s Jan 8 21:35:22.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988175929s Jan 8 21:35:23.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.914830498s Jan 8 21:35:24.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.90411693s Jan 8 21:35:25.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.890848715s Jan 8 21:35:26.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.885752697s Jan 8 21:35:27.155: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.875406243s Jan 8 21:35:28.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.868650687s Jan 8 21:35:29.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 860.558684ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-354 Jan 8 21:35:30.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 8 21:35:30.750: INFO: stderr: "I0108 21:35:30.422616 1002 log.go:172] (0xc0009d22c0) (0xc0009ca140) Create stream\nI0108 21:35:30.422809 1002 log.go:172] (0xc0009d22c0) (0xc0009ca140) Stream added, broadcasting: 1\nI0108 21:35:30.427105 1002 log.go:172] (0xc0009d22c0) Reply frame received for 1\nI0108 21:35:30.427207 1002 log.go:172] (0xc0009d22c0) (0xc0006afd60) Create stream\nI0108 21:35:30.427227 1002 log.go:172] (0xc0009d22c0) (0xc0006afd60) Stream added, broadcasting: 3\nI0108 21:35:30.428559 1002 log.go:172] (0xc0009d22c0) Reply frame received for 3\nI0108 21:35:30.428584 1002 log.go:172] (0xc0009d22c0) (0xc0009ca1e0) Create stream\nI0108 21:35:30.428591 1002 log.go:172] (0xc0009d22c0) (0xc0009ca1e0) Stream added, 
broadcasting: 5\nI0108 21:35:30.432790 1002 log.go:172] (0xc0009d22c0) Reply frame received for 5\nI0108 21:35:30.571742 1002 log.go:172] (0xc0009d22c0) Data frame received for 3\nI0108 21:35:30.572498 1002 log.go:172] (0xc0006afd60) (3) Data frame handling\nI0108 21:35:30.572698 1002 log.go:172] (0xc0006afd60) (3) Data frame sent\nI0108 21:35:30.573619 1002 log.go:172] (0xc0009d22c0) Data frame received for 5\nI0108 21:35:30.573738 1002 log.go:172] (0xc0009ca1e0) (5) Data frame handling\nI0108 21:35:30.573904 1002 log.go:172] (0xc0009ca1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:35:30.716036 1002 log.go:172] (0xc0009d22c0) Data frame received for 1\nI0108 21:35:30.717461 1002 log.go:172] (0xc0009d22c0) (0xc0006afd60) Stream removed, broadcasting: 3\nI0108 21:35:30.717687 1002 log.go:172] (0xc0009ca140) (1) Data frame handling\nI0108 21:35:30.717790 1002 log.go:172] (0xc0009ca140) (1) Data frame sent\nI0108 21:35:30.717810 1002 log.go:172] (0xc0009d22c0) (0xc0009ca1e0) Stream removed, broadcasting: 5\nI0108 21:35:30.717902 1002 log.go:172] (0xc0009d22c0) (0xc0009ca140) Stream removed, broadcasting: 1\nI0108 21:35:30.717953 1002 log.go:172] (0xc0009d22c0) Go away received\nI0108 21:35:30.720219 1002 log.go:172] (0xc0009d22c0) (0xc0009ca140) Stream removed, broadcasting: 1\nI0108 21:35:30.720234 1002 log.go:172] (0xc0009d22c0) (0xc0006afd60) Stream removed, broadcasting: 3\nI0108 21:35:30.720240 1002 log.go:172] (0xc0009d22c0) (0xc0009ca1e0) Stream removed, broadcasting: 5\n" Jan 8 21:35:30.750: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 8 21:35:30.750: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 8 21:35:30.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 8 21:35:31.298: INFO: stderr: "I0108 21:35:31.077281 1017 log.go:172] (0xc0000ed810) (0xc00055de00) Create stream\nI0108 21:35:31.077757 1017 log.go:172] (0xc0000ed810) (0xc00055de00) Stream added, broadcasting: 1\nI0108 21:35:31.084266 1017 log.go:172] (0xc0000ed810) Reply frame received for 1\nI0108 21:35:31.084507 1017 log.go:172] (0xc0000ed810) (0xc0007ba000) Create stream\nI0108 21:35:31.084529 1017 log.go:172] (0xc0000ed810) (0xc0007ba000) Stream added, broadcasting: 3\nI0108 21:35:31.086673 1017 log.go:172] (0xc0000ed810) Reply frame received for 3\nI0108 21:35:31.086728 1017 log.go:172] (0xc0000ed810) (0xc0007c8000) Create stream\nI0108 21:35:31.086742 1017 log.go:172] (0xc0000ed810) (0xc0007c8000) Stream added, broadcasting: 5\nI0108 21:35:31.088312 1017 log.go:172] (0xc0000ed810) Reply frame received for 5\nI0108 21:35:31.186391 1017 log.go:172] (0xc0000ed810) Data frame received for 3\nI0108 21:35:31.186543 1017 log.go:172] (0xc0007ba000) (3) Data frame handling\nI0108 21:35:31.186590 1017 log.go:172] (0xc0007ba000) (3) Data frame sent\nI0108 21:35:31.186637 1017 log.go:172] (0xc0000ed810) Data frame received for 5\nI0108 21:35:31.186643 1017 log.go:172] (0xc0007c8000) (5) Data frame handling\nI0108 21:35:31.186647 1017 log.go:172] (0xc0007c8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:35:31.291956 1017 log.go:172] (0xc0000ed810) Data frame received for 1\nI0108 21:35:31.292020 1017 log.go:172] (0xc0000ed810) (0xc0007ba000) Stream removed, broadcasting: 3\nI0108 
21:35:31.292065 1017 log.go:172] (0xc00055de00) (1) Data frame handling\nI0108 21:35:31.292080 1017 log.go:172] (0xc00055de00) (1) Data frame sent\nI0108 21:35:31.292100 1017 log.go:172] (0xc0000ed810) (0xc0007c8000) Stream removed, broadcasting: 5\nI0108 21:35:31.292117 1017 log.go:172] (0xc0000ed810) (0xc00055de00) Stream removed, broadcasting: 1\nI0108 21:35:31.292128 1017 log.go:172] (0xc0000ed810) Go away received\nI0108 21:35:31.292609 1017 log.go:172] (0xc0000ed810) (0xc00055de00) Stream removed, broadcasting: 1\nI0108 21:35:31.292618 1017 log.go:172] (0xc0000ed810) (0xc0007ba000) Stream removed, broadcasting: 3\nI0108 21:35:31.292621 1017 log.go:172] (0xc0000ed810) (0xc0007c8000) Stream removed, broadcasting: 5\n" Jan 8 21:35:31.298: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 8 21:35:31.298: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 8 21:35:31.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 8 21:35:31.822: INFO: stderr: "I0108 21:35:31.678354 1030 log.go:172] (0xc000a874a0) (0xc000a5a5a0) Create stream\nI0108 21:35:31.678745 1030 log.go:172] (0xc000a874a0) (0xc000a5a5a0) Stream added, broadcasting: 1\nI0108 21:35:31.681530 1030 log.go:172] (0xc000a874a0) Reply frame received for 1\nI0108 21:35:31.681564 1030 log.go:172] (0xc000a874a0) (0xc000b7c320) Create stream\nI0108 21:35:31.681573 1030 log.go:172] (0xc000a874a0) (0xc000b7c320) Stream added, broadcasting: 3\nI0108 21:35:31.682370 1030 log.go:172] (0xc000a874a0) Reply frame received for 3\nI0108 21:35:31.682399 1030 log.go:172] (0xc000a874a0) (0xc000a5a640) Create stream\nI0108 21:35:31.682410 1030 log.go:172] (0xc000a874a0) (0xc000a5a640) Stream added, broadcasting: 5\nI0108 21:35:31.683661 1030 log.go:172] (0xc000a874a0) Reply frame received for 5\nI0108 21:35:31.739833 1030 log.go:172] (0xc000a874a0) Data frame received for 5\nI0108 21:35:31.739957 1030 log.go:172] (0xc000a5a640) (5) Data frame handling\nI0108 21:35:31.739994 1030 log.go:172] (0xc000a5a640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:35:31.740034 1030 log.go:172] (0xc000a874a0) Data frame received for 3\nI0108 21:35:31.740043 1030 log.go:172] (0xc000b7c320) (3) Data frame handling\nI0108 21:35:31.740051 1030 log.go:172] (0xc000b7c320) (3) Data frame sent\nI0108 21:35:31.811423 1030 log.go:172] (0xc000a874a0) Data frame received for 1\nI0108 21:35:31.811583 1030 log.go:172] (0xc000a874a0) (0xc000b7c320) Stream removed, broadcasting: 3\nI0108 21:35:31.811684 1030 log.go:172] (0xc000a5a5a0) (1) Data frame handling\nI0108 21:35:31.811720 1030 log.go:172] (0xc000a5a5a0) (1) Data frame sent\nI0108 21:35:31.811900 1030 log.go:172] (0xc000a874a0) (0xc000a5a640) Stream removed, broadcasting: 5\nI0108 21:35:31.811986 1030 log.go:172] (0xc000a874a0) (0xc000a5a5a0) Stream removed, broadcasting: 1\nI0108 21:35:31.812028 1030 log.go:172] (0xc000a874a0) Go away received\nI0108 21:35:31.813318 1030 log.go:172] (0xc000a874a0) (0xc000a5a5a0) Stream removed, broadcasting: 1\nI0108 21:35:31.813331 1030 log.go:172] (0xc000a874a0) (0xc000b7c320) Stream removed, broadcasting: 3\nI0108 21:35:31.813341 1030 log.go:172] (0xc000a874a0) (0xc000a5a640) Stream removed, broadcasting: 5\n" Jan 8 21:35:31.822: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Jan 8 21:35:31.822: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 8 21:35:31.822: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 8 21:36:11.842: INFO: Deleting all statefulset in ns statefulset-354 Jan 8 21:36:11.847: INFO: Scaling statefulset ss to 0 Jan 8 21:36:11.869: INFO: Waiting for statefulset status.replicas updated to 0 Jan 8 21:36:11.872: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:36:11.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-354" for this suite. • [SLOW TEST:116.739 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":79,"skipped":1395,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:36:11.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:36:12.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9605" for this suite. 
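The Lease test above essentially asserts that the coordination.k8s.io lease API is served and that Lease objects can be created, read, updated, and deleted. A manual spot check against the same cluster might look like this (a sketch assuming the same kubeconfig as this run; kube-node-lease is the namespace where each kubelet keeps its heartbeat Lease):

$ kubectl --kubeconfig=/root/.kube/config api-resources --api-group=coordination.k8s.io
$ kubectl --kubeconfig=/root/.kube/config get leases -n kube-node-lease

With node leases enabled, one Lease per node (here jerma-node and jerma-server-mvvl6gufaqub) should appear, and its spec.renewTime should advance roughly every ten seconds.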
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":80,"skipped":1418,"failed":0} S ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:36:12.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods changes Jan 8 21:36:12.444: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 8 21:36:17.560: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:36:17.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6356" for this suite. • [SLOW TEST:5.520 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":81,"skipped":1419,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:36:17.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:36:29.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7397" for this suite. • [SLOW TEST:11.498 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":82,"skipped":1421,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:36:29.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 STEP: creating a pod Jan 8 21:36:29.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8255 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 8 21:36:29.692: INFO: stderr: "" Jan 8 21:36:29.692: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Jan 8 21:36:29.692: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 8 21:36:29.692: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8255" to be "running and ready, or succeeded" Jan 8 21:36:29.707: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.471674ms Jan 8 21:36:31.717: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024650911s Jan 8 21:36:33.723: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030960715s Jan 8 21:36:35.730: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.037955572s Jan 8 21:36:35.730: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 8 21:36:35.730: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Jan 8 21:36:35.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255' Jan 8 21:36:35.971: INFO: stderr: "" Jan 8 21:36:35.971: INFO: stdout: "I0108 21:36:34.805717 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/htwz 495\nI0108 21:36:35.005807 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/qgx 212\nI0108 21:36:35.206079 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/8lct 566\nI0108 21:36:35.405992 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qpc 418\nI0108 21:36:35.606371 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/s5l 405\nI0108 21:36:35.806156 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/fzlg 280\n" STEP: limiting log lines Jan 8 21:36:35.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --tail=1' Jan 8 21:36:36.074: INFO: stderr: "" Jan 8 21:36:36.074: INFO: stdout: "I0108 21:36:36.005940 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/pzs 343\n" Jan 8 21:36:36.074: INFO: got output "I0108 21:36:36.005940 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/pzs 343\n" STEP: limiting log bytes Jan 8 21:36:36.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --limit-bytes=1' Jan 8 21:36:36.205: INFO: stderr: "" Jan 8 21:36:36.205: INFO: stdout: "I" Jan 8 21:36:36.205: INFO: got output "I" STEP: exposing timestamps Jan 8 21:36:36.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --tail=1 --timestamps' Jan 8 21:36:36.386: INFO: stderr: "" Jan 8 21:36:36.386: INFO: stdout: "2020-01-08T21:36:36.206356805Z I0108 21:36:36.205946 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/89h 428\n" Jan 8 21:36:36.386: INFO: got output "2020-01-08T21:36:36.206356805Z I0108 21:36:36.205946 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/89h 428\n" STEP: restricting to a time range Jan 8 21:36:38.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --since=1s' Jan 8 21:36:39.142: INFO: stderr: "" Jan 8 21:36:39.142: INFO: stdout: "I0108 21:36:38.205987 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/sb8 518\nI0108 21:36:38.406213 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/vr4g 505\nI0108 21:36:38.606185 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/x6m 295\nI0108 21:36:38.806151 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/7hx 337\nI0108 21:36:39.006349 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/bwc 266\n" Jan 8 21:36:39.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --since=24h' Jan 8 21:36:39.311: INFO: stderr: "" Jan 8 21:36:39.311: INFO: stdout: "I0108 21:36:34.805717 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/htwz 495\nI0108 21:36:35.005807 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/qgx 212\nI0108 21:36:35.206079 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/8lct 566\nI0108 21:36:35.405992 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qpc 
418\nI0108 21:36:35.606371 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/s5l 405\nI0108 21:36:35.806156 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/fzlg 280\nI0108 21:36:36.005940 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/pzs 343\nI0108 21:36:36.205946 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/89h 428\nI0108 21:36:36.405885 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/tl26 375\nI0108 21:36:36.606280 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/85qb 272\nI0108 21:36:36.806267 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/c748 343\nI0108 21:36:37.006128 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/8fd 528\nI0108 21:36:37.206127 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/dkb 350\nI0108 21:36:37.406029 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/2c8 205\nI0108 21:36:37.605943 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/pfnq 445\nI0108 21:36:37.806010 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/qflt 323\nI0108 21:36:38.005983 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/t5s6 418\nI0108 21:36:38.205987 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/sb8 518\nI0108 21:36:38.406213 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/vr4g 505\nI0108 21:36:38.606185 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/x6m 295\nI0108 21:36:38.806151 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/7hx 337\nI0108 21:36:39.006349 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/bwc 266\nI0108 21:36:39.206020 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/sjh5 264\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 Jan 8 21:36:39.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8255' Jan 8 21:36:52.380: INFO: stderr: "" Jan 8 21:36:52.380: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:36:52.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8255" for this suite. 
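For reference, the filtering flags exercised above compose freely and can be replayed verbatim while the generator pod is alive; the pod and its container are both named logs-generator, hence the doubled argument. A sketch using the same kubeconfig and namespace as this run:

$ kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --tail=1         # only the last line
$ kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --limit-bytes=1  # only the first byte
$ kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
$ kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8255 --since=1s       # only lines emitted in the last second

The --since=24h invocation at the end of the test is what pulls back the pod's full history.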
• [SLOW TEST:23.095 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":83,"skipped":1431,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:36:52.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 8 21:36:52.511: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 8 21:36:52.523: INFO: Waiting for terminating namespaces to be deleted... Jan 8 21:36:52.526: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 8 21:36:52.541: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 8 21:36:52.541: INFO: Container kube-proxy ready: true, restart count 0 Jan 8 21:36:52.541: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 8 21:36:52.541: INFO: Container weave ready: true, restart count 1 Jan 8 21:36:52.541: INFO: Container weave-npc ready: true, restart count 0 Jan 8 21:36:52.541: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 8 21:36:52.595: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 8 21:36:52.595: INFO: Container kube-apiserver ready: true, restart count 1 Jan 8 21:36:52.595: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 8 21:36:52.595: INFO: Container etcd ready: true, restart count 1 Jan 8 21:36:52.595: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 8 21:36:52.595: INFO: Container coredns ready: true, restart count 0 Jan 8 21:36:52.595: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 8 21:36:52.595: INFO: Container coredns ready: true, restart count 0 Jan 8 21:36:52.595: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 8 21:36:52.595: INFO: Container kube-controller-manager ready: true, restart count 1 Jan 8 21:36:52.595: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 8 21:36:52.595: INFO: 
Container kube-proxy ready: true, restart count 0 Jan 8 21:36:52.595: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 8 21:36:52.595: INFO: Container weave ready: true, restart count 0 Jan 8 21:36:52.595: INFO: Container weave-npc ready: true, restart count 0 Jan 8 21:36:52.595: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 8 21:36:52.595: INFO: Container kube-scheduler ready: true, restart count 2 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cb7684a7-a4e5-4244-8bae-d37979e9cdb9 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-cb7684a7-a4e5-4244-8bae-d37979e9cdb9 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-cb7684a7-a4e5-4244-8bae-d37979e9cdb9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:37:06.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3348" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:14.491 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":84,"skipped":1444,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:37:06.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-3f9bcf96-a398-4042-92ea-9b74e0bad744 STEP: Creating a pod to test consume secrets Jan 8 21:37:07.082: INFO: Waiting up to 5m0s for pod "pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb" in namespace "secrets-3103" to be "success or failure" Jan 8 21:37:07.091: INFO: Pod "pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb": Phase="Pending", Reason="", 
readiness=false. Elapsed: 9.439656ms Jan 8 21:37:09.098: INFO: Pod "pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016180076s Jan 8 21:37:11.105: INFO: Pod "pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023795706s Jan 8 21:37:13.111: INFO: Pod "pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029364227s Jan 8 21:37:15.120: INFO: Pod "pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03840729s STEP: Saw pod success Jan 8 21:37:15.120: INFO: Pod "pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb" satisfied condition "success or failure" Jan 8 21:37:15.124: INFO: Trying to get logs from node jerma-node pod pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb container secret-volume-test: STEP: delete the pod Jan 8 21:37:15.292: INFO: Waiting for pod pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb to disappear Jan 8 21:37:15.297: INFO: Pod pod-secrets-c2647081-0c2a-473f-b4d3-6ef7f1015ccb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:37:15.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3103" for this suite. • [SLOW TEST:8.424 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1445,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:37:15.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-499dd869-cd9a-4968-80e5-81b5314f545c STEP: Creating a pod to test consume secrets Jan 8 21:37:15.496: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef" in namespace "projected-1702" to be "success or failure" Jan 8 21:37:15.511: INFO: Pod "pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef": Phase="Pending", Reason="", readiness=false. Elapsed: 14.996092ms Jan 8 21:37:17.524: INFO: Pod "pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027836852s Jan 8 21:37:19.533: INFO: Pod "pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036812204s Jan 8 21:37:21.548: INFO: Pod "pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051864847s Jan 8 21:37:23.557: INFO: Pod "pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061374522s STEP: Saw pod success Jan 8 21:37:23.558: INFO: Pod "pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef" satisfied condition "success or failure" Jan 8 21:37:23.563: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef container projected-secret-volume-test: STEP: delete the pod Jan 8 21:37:23.897: INFO: Waiting for pod pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef to disappear Jan 8 21:37:23.958: INFO: Pod pod-projected-secrets-7d2af0d3-b72e-462a-a76a-94d08e1511ef no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:37:23.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1702" for this suite. • [SLOW TEST:8.657 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1445,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 8 21:37:23.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 8 21:37:32.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3223" for this suite. 
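The Kubelet test above starts a busybox container whose command always fails and then checks that the kubelet reports a terminated state with a reason. Outside the suite the same field can be read straight from pod status; a sketch, with the pod name and namespace purely illustrative since the test deletes its namespace on exit:

$ kubectl get pod bin-false-pod -n kubelet-test-3223 -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'

A command that exits non-zero typically yields the reason Error; between restart attempts the live state may instead be waiting (CrashLoopBackOff), in which case lastState.terminated carries the reason.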
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:37:23.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:37:32.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3223" for this suite.

• [SLOW TEST:8.257 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1447,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:37:32.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-232271bb-8678-42bc-8fdf-c034315092f9
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:37:42.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8581" for this suite.

• [SLOW TEST:10.248 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1476,"failed":0}
SSS
------------------------------
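The test above relies on a ConfigMap carrying both `data` (UTF-8 strings) and `binaryData` (raw bytes); both are projected into the mounted volume as files. A minimal sketch of such an object, with placeholder names and bytes:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap mixes text and raw-byte entries in one ConfigMap;
// a pod mounting it sees one file per key under the mount path.
func binaryConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"}, // placeholder
		Data:       map[string]string{"data": "value"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}}, // placeholder bytes
	}
}
```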
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:37:42.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-fbc0945d-0c47-42a2-9ea0-fb29bef4120f
STEP: Creating a pod to test consume secrets
Jan  8 21:37:42.634: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390" in namespace "projected-5992" to be "success or failure"
Jan  8 21:37:42.644: INFO: Pod "pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390": Phase="Pending", Reason="", readiness=false. Elapsed: 9.455405ms
Jan  8 21:37:44.652: INFO: Pod "pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01752951s
Jan  8 21:37:46.665: INFO: Pod "pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030457799s
Jan  8 21:37:48.671: INFO: Pod "pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036371877s
Jan  8 21:37:50.691: INFO: Pod "pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057152031s
STEP: Saw pod success
Jan  8 21:37:50.692: INFO: Pod "pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390" satisfied condition "success or failure"
Jan  8 21:37:50.737: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390 container projected-secret-volume-test: 
STEP: delete the pod
Jan  8 21:37:50.768: INFO: Waiting for pod pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390 to disappear
Jan  8 21:37:50.797: INFO: Pod pod-projected-secrets-d72fc95a-d564-4807-9e8c-1a1d62eab390 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:37:50.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5992" for this suite.

• [SLOW TEST:8.325 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1479,"failed":0}
SSS
------------------------------
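The defaultMode/fsGroup test above combines three knobs: the pod runs as a non-root UID, the volume is group-owned by `fsGroup`, and every projected file gets `defaultMode`. A minimal sketch of that pod spec; the UID, GID, mode, and all names are placeholders (the busybox image appears elsewhere in this run):

```go
package example

import corev1 "k8s.io/api/core/v1"

// nonRootSecretPodSpec projects a Secret with a volume-wide default
// file mode, running as a non-root user with an fsGroup applied.
func nonRootSecretPodSpec() corev1.PodSpec {
	uid := int64(1000)         // placeholder non-root UID
	fsGroup := int64(1001)     // placeholder supplemental group owning the volume
	defaultMode := int32(0440) // -r--r-----
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &uid,
			FSGroup:   &fsGroup,
		},
		Volumes: []corev1.Volume{{
			Name: "projected-secret-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &defaultMode,
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "projected-secret-test", // placeholder
							},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "projected-secret-volume-test",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"sh", "-c", "ls -ln /etc/projected-secret && sleep 3600"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "projected-secret-volume",
				MountPath: "/etc/projected-secret",
			}},
		}},
	}
}
```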
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:37:50.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  8 21:37:50.975: INFO: Waiting up to 5m0s for pod "pod-d133a222-1be7-4d39-8b6c-e3f014289fe4" in namespace "emptydir-6755" to be "success or failure"
Jan  8 21:37:51.027: INFO: Pod "pod-d133a222-1be7-4d39-8b6c-e3f014289fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 51.339865ms
Jan  8 21:37:53.038: INFO: Pod "pod-d133a222-1be7-4d39-8b6c-e3f014289fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062775531s
Jan  8 21:37:55.042: INFO: Pod "pod-d133a222-1be7-4d39-8b6c-e3f014289fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066923174s
Jan  8 21:37:57.055: INFO: Pod "pod-d133a222-1be7-4d39-8b6c-e3f014289fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080220795s
Jan  8 21:37:59.088: INFO: Pod "pod-d133a222-1be7-4d39-8b6c-e3f014289fe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112365394s
STEP: Saw pod success
Jan  8 21:37:59.088: INFO: Pod "pod-d133a222-1be7-4d39-8b6c-e3f014289fe4" satisfied condition "success or failure"
Jan  8 21:37:59.092: INFO: Trying to get logs from node jerma-node pod pod-d133a222-1be7-4d39-8b6c-e3f014289fe4 container test-container: 
STEP: delete the pod
Jan  8 21:37:59.126: INFO: Waiting for pod pod-d133a222-1be7-4d39-8b6c-e3f014289fe4 to disappear
Jan  8 21:37:59.131: INFO: Pod pod-d133a222-1be7-4d39-8b6c-e3f014289fe4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:37:59.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6755" for this suite.

• [SLOW TEST:8.338 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1482,"failed":0}
SSSS
------------------------------
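"On tmpfs" in the test above means the emptyDir is backed by RAM rather than node disk, selected via the volume's medium. A minimal sketch:

```go
package example

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir requests a memory-backed (tmpfs) emptyDir; the test
// then checks that the resulting mount carries the expected mode.
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // tmpfs instead of node disk
			},
		},
	}
}
```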
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:37:59.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-98a23db7-5a85-4411-8b43-60af690650d8
STEP: Creating secret with name s-test-opt-upd-2fbea5e8-caad-4e93-a062-9d800f809b58
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-98a23db7-5a85-4411-8b43-60af690650d8
STEP: Updating secret s-test-opt-upd-2fbea5e8-caad-4e93-a062-9d800f809b58
STEP: Creating secret with name s-test-opt-create-b50a5ab6-123c-43d1-8ae7-27ad62f2b134
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:38:11.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3628" for this suite.

• [SLOW TEST:12.599 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1486,"failed":0}
S
------------------------------
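The "optional" in the test above is the `Optional` flag on the secret volume source: the pod starts even when the referenced Secret does not exist, and the kubelet adds, updates, or removes the projected files as the Secret is created, updated, or deleted. A minimal sketch (the secret name is a placeholder):

```go
package example

import corev1 "k8s.io/api/core/v1"

// optionalSecretVolume tolerates the Secret being absent at pod start;
// its files appear and disappear as the Secret changes, which is what
// the test observes in the volume.
func optionalSecretVolume() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-del", // placeholder
				Optional:   &optional,
			},
		},
	}
}
```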
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:38:11.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:38:37.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-82" for this suite.

• [SLOW TEST:26.185 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":92,"skipped":1487,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:38:37.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-514cf3b5-81e1-4915-90bd-e35e46cfae98
STEP: Creating a pod to test consume configMaps
Jan  8 21:38:38.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe" in namespace "configmap-7997" to be "success or failure"
Jan  8 21:38:38.203: INFO: Pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe": Phase="Pending", Reason="", readiness=false. Elapsed: 85.265258ms
Jan  8 21:38:40.207: INFO: Pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089228127s
Jan  8 21:38:42.213: INFO: Pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095578894s
Jan  8 21:38:44.295: INFO: Pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177543307s
Jan  8 21:38:46.303: INFO: Pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18603866s
Jan  8 21:38:48.315: INFO: Pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197764701s
STEP: Saw pod success
Jan  8 21:38:48.315: INFO: Pod "pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe" satisfied condition "success or failure"
Jan  8 21:38:48.323: INFO: Trying to get logs from node jerma-node pod pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe container configmap-volume-test: 
STEP: delete the pod
Jan  8 21:38:48.375: INFO: Waiting for pod pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe to disappear
Jan  8 21:38:48.379: INFO: Pod pod-configmaps-896c898d-f2d1-4feb-9b0f-71c499de09fe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:38:48.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7997" for this suite.

• [SLOW TEST:10.463 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1520,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:38:48.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:38:48.533: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 17.843741ms)
Jan  8 21:38:48.540: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.60827ms)
Jan  8 21:38:48.545: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.685842ms)
Jan  8 21:38:48.548: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.485754ms)
Jan  8 21:38:48.552: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.303429ms)
Jan  8 21:38:48.555: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.535383ms)
Jan  8 21:38:48.558: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.941017ms)
Jan  8 21:38:48.561: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.162093ms)
Jan  8 21:38:48.564: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.909167ms)
Jan  8 21:38:48.567: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.755782ms)
Jan  8 21:38:48.570: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.322026ms)
Jan  8 21:38:48.573: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.891902ms)
Jan  8 21:38:48.639: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 65.390802ms)
Jan  8 21:38:48.644: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.595422ms)
Jan  8 21:38:48.663: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 18.220224ms)
Jan  8 21:38:48.681: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 17.991239ms)
Jan  8 21:38:48.685: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.419631ms)
Jan  8 21:38:48.691: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.80907ms)
Jan  8 21:38:48.696: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.442445ms)
Jan  8 21:38:48.699: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.263086ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:38:48.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3611" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":94,"skipped":1541,"failed":0}
SSSS
------------------------------
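The twenty requests above all go through the apiserver's node proxy subresource, which forwards to the kubelet's log endpoint on the explicit port 10250 and returns the directory listing seen in the output. One way to issue the same request yourself, sketched with a recent client-go (node name taken from the log; everything else is an assumption):

```go
package example

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// nodeLogs fetches the kubelet's /logs/ listing for a node through the
// apiserver proxy, the same path the test hits repeatedly above.
func nodeLogs(clientset *kubernetes.Clientset, node string) (string, error) {
	raw, err := clientset.CoreV1().RESTClient().Get().
		AbsPath(fmt.Sprintf("/api/v1/nodes/%s:10250/proxy/logs/", node)).
		DoRaw(context.TODO())
	return string(raw), err
}
```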
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:38:48.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-pshg
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 21:38:48.799: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pshg" in namespace "subpath-6679" to be "success or failure"
Jan  8 21:38:48.833: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Pending", Reason="", readiness=false. Elapsed: 33.825028ms
Jan  8 21:38:50.841: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042306406s
Jan  8 21:38:52.896: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097003721s
Jan  8 21:38:55.003: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204236005s
Jan  8 21:38:57.011: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 8.211794914s
Jan  8 21:38:59.027: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 10.228000998s
Jan  8 21:39:01.034: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 12.235528713s
Jan  8 21:39:03.040: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 14.240642645s
Jan  8 21:39:05.045: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 16.24638882s
Jan  8 21:39:07.053: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 18.253741737s
Jan  8 21:39:09.069: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 20.270055146s
Jan  8 21:39:11.077: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 22.277821078s
Jan  8 21:39:13.083: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 24.28446597s
Jan  8 21:39:15.092: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Running", Reason="", readiness=true. Elapsed: 26.293478142s
Jan  8 21:39:17.099: INFO: Pod "pod-subpath-test-configmap-pshg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.299685408s
STEP: Saw pod success
Jan  8 21:39:17.099: INFO: Pod "pod-subpath-test-configmap-pshg" satisfied condition "success or failure"
Jan  8 21:39:17.102: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-pshg container test-container-subpath-configmap-pshg: 
STEP: delete the pod
Jan  8 21:39:17.187: INFO: Waiting for pod pod-subpath-test-configmap-pshg to disappear
Jan  8 21:39:17.259: INFO: Pod pod-subpath-test-configmap-pshg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pshg
Jan  8 21:39:17.260: INFO: Deleting pod "pod-subpath-test-configmap-pshg" in namespace "subpath-6679"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:39:17.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6679" for this suite.

• [SLOW TEST:28.643 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":95,"skipped":1545,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
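The subpath test above mounts a single entry of a configMap volume over an existing file, rather than mounting the whole volume over a directory; `subPath` selects which entry lands at `mountPath`. A minimal sketch of the mount (the key and target file are placeholders):

```go
package example

import corev1 "k8s.io/api/core/v1"

// subPathMount overlays one key of a configMap volume onto a single
// existing file in the container image, leaving the rest of the
// directory untouched.
func subPathMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      "configmap-volume",
		MountPath: "/etc/hostname",   // an existing file in the image (placeholder)
		SubPath:   "configmap-key",   // single entry from the volume (placeholder)
	}
}
```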
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:39:17.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 21:39:18.565: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 21:39:20.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:39:22.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:39:24.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116358, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 21:39:27.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:39:27.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1008" for this suite.
STEP: Destroying namespace "webhook-1008-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.583 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":96,"skipped":1567,"failed":0}
SSSSSSS
------------------------------
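Registering a mutating pod webhook like the one above comes down to a MutatingWebhookConfiguration that points at the in-cluster service just deployed ("e2e-test-webhook" in namespace "webhook-1008", per the log). A sketch with `k8s.io/api/admissionregistration/v1`; the configuration name, webhook name, and handler path are placeholders, and the CA bundle is whatever signed the server cert set up earlier:

```go
package example

import (
	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mutatingPodWebhook registers a webhook invoked on pod CREATE; the
// webhook may patch defaults into the pod before it is persisted.
func mutatingPodWebhook(caBundle []byte) *admissionv1.MutatingWebhookConfiguration {
	path := "/mutating-pods" // placeholder handler path
	sideEffects := admissionv1.SideEffectClassNone
	return &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-defaulter"}, // placeholder
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "defaults.example.com", // placeholder
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-1008",     // from the log
					Name:      "e2e-test-webhook", // from the log
					Path:      &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}
```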
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:39:27.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan  8 21:39:28.023: INFO: Created pod &Pod{ObjectMeta:{dns-796  dns-796 /api/v1/namespaces/dns-796/pods/dns-796 dd4350e0-7c6a-4269-864b-33480aecd7a8 890100 0 2020-01-08 21:39:28 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kt7jj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kt7jj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kt7jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan  8 21:39:40.040: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-796 PodName:dns-796 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 21:39:40.040: INFO: >>> kubeConfig: /root/.kube/config
I0108 21:39:40.093992       9 log.go:172] (0xc000ac68f0) (0xc0028e08c0) Create stream
I0108 21:39:40.094047       9 log.go:172] (0xc000ac68f0) (0xc0028e08c0) Stream added, broadcasting: 1
I0108 21:39:40.098924       9 log.go:172] (0xc000ac68f0) Reply frame received for 1
I0108 21:39:40.098966       9 log.go:172] (0xc000ac68f0) (0xc00172c140) Create stream
I0108 21:39:40.098979       9 log.go:172] (0xc000ac68f0) (0xc00172c140) Stream added, broadcasting: 3
I0108 21:39:40.103621       9 log.go:172] (0xc000ac68f0) Reply frame received for 3
I0108 21:39:40.103737       9 log.go:172] (0xc000ac68f0) (0xc0028e0a00) Create stream
I0108 21:39:40.103757       9 log.go:172] (0xc000ac68f0) (0xc0028e0a00) Stream added, broadcasting: 5
I0108 21:39:40.106125       9 log.go:172] (0xc000ac68f0) Reply frame received for 5
I0108 21:39:40.204470       9 log.go:172] (0xc000ac68f0) Data frame received for 3
I0108 21:39:40.204551       9 log.go:172] (0xc00172c140) (3) Data frame handling
I0108 21:39:40.204577       9 log.go:172] (0xc00172c140) (3) Data frame sent
I0108 21:39:40.273886       9 log.go:172] (0xc000ac68f0) (0xc00172c140) Stream removed, broadcasting: 3
I0108 21:39:40.274081       9 log.go:172] (0xc000ac68f0) Data frame received for 1
I0108 21:39:40.274129       9 log.go:172] (0xc000ac68f0) (0xc0028e0a00) Stream removed, broadcasting: 5
I0108 21:39:40.274174       9 log.go:172] (0xc0028e08c0) (1) Data frame handling
I0108 21:39:40.274192       9 log.go:172] (0xc0028e08c0) (1) Data frame sent
I0108 21:39:40.274204       9 log.go:172] (0xc000ac68f0) (0xc0028e08c0) Stream removed, broadcasting: 1
I0108 21:39:40.274241       9 log.go:172] (0xc000ac68f0) Go away received
I0108 21:39:40.275178       9 log.go:172] (0xc000ac68f0) (0xc0028e08c0) Stream removed, broadcasting: 1
I0108 21:39:40.275228       9 log.go:172] (0xc000ac68f0) (0xc00172c140) Stream removed, broadcasting: 3
I0108 21:39:40.275245       9 log.go:172] (0xc000ac68f0) (0xc0028e0a00) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan  8 21:39:40.275: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-796 PodName:dns-796 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 21:39:40.275: INFO: >>> kubeConfig: /root/.kube/config
I0108 21:39:40.337708       9 log.go:172] (0xc002ad5b80) (0xc00115f860) Create stream
I0108 21:39:40.337909       9 log.go:172] (0xc002ad5b80) (0xc00115f860) Stream added, broadcasting: 1
I0108 21:39:40.347938       9 log.go:172] (0xc002ad5b80) Reply frame received for 1
I0108 21:39:40.348003       9 log.go:172] (0xc002ad5b80) (0xc0018a3680) Create stream
I0108 21:39:40.348016       9 log.go:172] (0xc002ad5b80) (0xc0018a3680) Stream added, broadcasting: 3
I0108 21:39:40.349950       9 log.go:172] (0xc002ad5b80) Reply frame received for 3
I0108 21:39:40.349994       9 log.go:172] (0xc002ad5b80) (0xc00172c280) Create stream
I0108 21:39:40.350030       9 log.go:172] (0xc002ad5b80) (0xc00172c280) Stream added, broadcasting: 5
I0108 21:39:40.353513       9 log.go:172] (0xc002ad5b80) Reply frame received for 5
I0108 21:39:40.447575       9 log.go:172] (0xc002ad5b80) Data frame received for 3
I0108 21:39:40.447700       9 log.go:172] (0xc0018a3680) (3) Data frame handling
I0108 21:39:40.447730       9 log.go:172] (0xc0018a3680) (3) Data frame sent
I0108 21:39:40.556475       9 log.go:172] (0xc002ad5b80) (0xc00172c280) Stream removed, broadcasting: 5
I0108 21:39:40.556598       9 log.go:172] (0xc002ad5b80) Data frame received for 1
I0108 21:39:40.556617       9 log.go:172] (0xc00115f860) (1) Data frame handling
I0108 21:39:40.556642       9 log.go:172] (0xc00115f860) (1) Data frame sent
I0108 21:39:40.556732       9 log.go:172] (0xc002ad5b80) (0xc0018a3680) Stream removed, broadcasting: 3
I0108 21:39:40.556783       9 log.go:172] (0xc002ad5b80) (0xc00115f860) Stream removed, broadcasting: 1
I0108 21:39:40.556796       9 log.go:172] (0xc002ad5b80) Go away received
I0108 21:39:40.557150       9 log.go:172] (0xc002ad5b80) (0xc00115f860) Stream removed, broadcasting: 1
I0108 21:39:40.557196       9 log.go:172] (0xc002ad5b80) (0xc0018a3680) Stream removed, broadcasting: 3
I0108 21:39:40.557214       9 log.go:172] (0xc002ad5b80) (0xc00172c280) Stream removed, broadcasting: 5
Jan  8 21:39:40.557: INFO: Deleting pod dns-796...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:39:40.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-796" for this suite.

• [SLOW TEST:12.709 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":97,"skipped":1574,"failed":0}
S
------------------------------
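The pod dump above shows the two fields this test is really about: `DNSPolicy: None` disables cluster DNS entirely, and `DNSConfig` supplies the resolv.conf contents by hand (nameserver 1.1.1.1, search domain resolv.conf.local). A minimal sketch of the same spec, mirroring the values from the dump:

```go
package example

import corev1 "k8s.io/api/core/v1"

// customDNSPodSpec opts out of cluster DNS and writes the pod's
// resolv.conf from DNSConfig, as the dns-796 pod above does.
func customDNSPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		DNSPolicy: corev1.DNSNone,
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},
			Searches:    []string{"resolv.conf.local"},
		},
		Containers: []corev1.Container{{
			Name:  "agnhost",
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
			Args:  []string{"pause"},
		}},
	}
}
```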
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:39:40.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:39:53.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7091" for this suite.

• [SLOW TEST:13.234 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":98,"skipped":1575,"failed":0}
SSSS
------------------------------
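The quota test above hinges on admission-time enforcement: a pod that fits the quota is admitted and charged against `.status.used`, while a pod that would exceed the remaining budget is rejected outright, and deleting the pod releases the usage again. A sketch of a quota object of the kind being exercised (name and limits are placeholders):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podQuota caps the namespace at one pod plus small CPU/memory request
// budgets; a second pod, or one requesting more than the remainder,
// fails admission rather than scheduling.
func podQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"}, // placeholder
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:           resource.MustParse("1"),
				corev1.ResourceRequestsCPU:    resource.MustParse("500m"),
				corev1.ResourceRequestsMemory: resource.MustParse("256Mi"),
			},
		},
	}
}
```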
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:39:53.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9425
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9425
STEP: creating replication controller externalsvc in namespace services-9425
I0108 21:39:54.169016       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9425, replica count: 2
I0108 21:39:57.224463       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 21:40:00.224947       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 21:40:03.225662       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jan  8 21:40:03.290: INFO: Creating new exec pod
Jan  8 21:40:11.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9425 execpodbhs9z -- /bin/sh -x -c nslookup clusterip-service'
Jan  8 21:40:11.812: INFO: stderr: "I0108 21:40:11.600429    1210 log.go:172] (0xc00099e840) (0xc0008b6500) Create stream\nI0108 21:40:11.600645    1210 log.go:172] (0xc00099e840) (0xc0008b6500) Stream added, broadcasting: 1\nI0108 21:40:11.607788    1210 log.go:172] (0xc00099e840) Reply frame received for 1\nI0108 21:40:11.607841    1210 log.go:172] (0xc00099e840) (0xc000686640) Create stream\nI0108 21:40:11.607850    1210 log.go:172] (0xc00099e840) (0xc000686640) Stream added, broadcasting: 3\nI0108 21:40:11.609497    1210 log.go:172] (0xc00099e840) Reply frame received for 3\nI0108 21:40:11.609540    1210 log.go:172] (0xc00099e840) (0xc0002b7400) Create stream\nI0108 21:40:11.609556    1210 log.go:172] (0xc00099e840) (0xc0002b7400) Stream added, broadcasting: 5\nI0108 21:40:11.611315    1210 log.go:172] (0xc00099e840) Reply frame received for 5\nI0108 21:40:11.697694    1210 log.go:172] (0xc00099e840) Data frame received for 5\nI0108 21:40:11.697962    1210 log.go:172] (0xc0002b7400) (5) Data frame handling\nI0108 21:40:11.698021    1210 log.go:172] (0xc0002b7400) (5) Data frame sent\n+ nslookup clusterip-service\nI0108 21:40:11.720167    1210 log.go:172] (0xc00099e840) Data frame received for 3\nI0108 21:40:11.720276    1210 log.go:172] (0xc000686640) (3) Data frame handling\nI0108 21:40:11.720317    1210 log.go:172] (0xc000686640) (3) Data frame sent\nI0108 21:40:11.725276    1210 log.go:172] (0xc00099e840) Data frame received for 3\nI0108 21:40:11.725489    1210 log.go:172] (0xc000686640) (3) Data frame handling\nI0108 21:40:11.725571    1210 log.go:172] (0xc000686640) (3) Data frame sent\nI0108 21:40:11.802404    1210 log.go:172] (0xc00099e840) Data frame received for 1\nI0108 21:40:11.802611    1210 log.go:172] (0xc0008b6500) (1) Data frame handling\nI0108 21:40:11.802649    1210 log.go:172] (0xc0008b6500) (1) Data frame sent\nI0108 21:40:11.803198    1210 log.go:172] (0xc00099e840) (0xc0002b7400) Stream removed, broadcasting: 5\nI0108 21:40:11.803273    1210 log.go:172] (0xc00099e840) (0xc0008b6500) Stream removed, broadcasting: 1\nI0108 21:40:11.803820    1210 log.go:172] (0xc00099e840) (0xc000686640) Stream removed, broadcasting: 3\nI0108 21:40:11.803852    1210 log.go:172] (0xc00099e840) (0xc0008b6500) Stream removed, broadcasting: 1\nI0108 21:40:11.803863    1210 log.go:172] (0xc00099e840) (0xc000686640) Stream removed, broadcasting: 3\nI0108 21:40:11.803871    1210 log.go:172] (0xc00099e840) (0xc0002b7400) Stream removed, broadcasting: 5\n"
Jan  8 21:40:11.812: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9425.svc.cluster.local\tcanonical name = externalsvc.services-9425.svc.cluster.local.\nName:\texternalsvc.services-9425.svc.cluster.local\nAddress: 10.96.131.194\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9425, will wait for the garbage collector to delete the pods
Jan  8 21:40:11.879: INFO: Deleting ReplicationController externalsvc took: 9.700321ms
Jan  8 21:40:12.179: INFO: Terminating ReplicationController externalsvc pods took: 300.618676ms
Jan  8 21:40:22.431: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:40:22.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9425" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:28.603 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":99,"skipped":1579,"failed":0}
S
------------------------------
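The type change above turns the service's DNS record into a CNAME, which is exactly what the nslookup output shows (clusterip-service resolving as a canonical name for externalsvc). A sketch of the update with client-go, under the assumption that switching to ExternalName requires clearing the allocated cluster IP and ports:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName flips a ClusterIP service to ExternalName, after which
// lookups of the service name return a CNAME to the target.
func toExternalName(cs *kubernetes.Clientset, ns, name, target string) error {
	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = target // e.g. externalsvc.services-9425.svc.cluster.local
	svc.Spec.ClusterIP = ""        // ExternalName services carry no cluster IP
	svc.Spec.Ports = nil           // ...and expose no ports
	_, err = cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
	return err
}
```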
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:40:22.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:40:33.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4735" for this suite.

• [SLOW TEST:11.306 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":100,"skipped":1580,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:40:33.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 21:40:33.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801" in namespace "projected-5456" to be "success or failure"
Jan  8 21:40:34.010: INFO: Pod "downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801": Phase="Pending", Reason="", readiness=false. Elapsed: 12.100926ms
Jan  8 21:40:36.017: INFO: Pod "downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018795693s
Jan  8 21:40:38.024: INFO: Pod "downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025140617s
Jan  8 21:40:40.030: INFO: Pod "downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031602927s
Jan  8 21:40:42.036: INFO: Pod "downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037635824s
STEP: Saw pod success
Jan  8 21:40:42.036: INFO: Pod "downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801" satisfied condition "success or failure"
Jan  8 21:40:42.041: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801 container client-container: 
STEP: delete the pod
Jan  8 21:40:42.271: INFO: Waiting for pod downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801 to disappear
Jan  8 21:40:42.278: INFO: Pod downwardapi-volume-9f420b9d-42d3-4700-8107-3aeabfec5801 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:40:42.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5456" for this suite.

• [SLOW TEST:8.520 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1585,"failed":0}
SSSS
------------------------------
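The downward API counterpart of the earlier secret projections: pod metadata is rendered as files, and `DefaultMode` on the projected volume applies to every file that does not override it per item. A minimal sketch (path and mode are placeholders):

```go
package example

import corev1 "k8s.io/api/core/v1"

// downwardAPIVolume projects the pod's own name as a file; DefaultMode
// governs the mode of every projected file, which is what the test
// inspects.
func downwardAPIVolume() corev1.Volume {
	defaultMode := int32(0400) // placeholder volume-wide mode
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
}
```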
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:40:42.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan  8 21:40:53.021: INFO: Successfully updated pod "adopt-release-g8khd"
STEP: Checking that the Job readopts the Pod
Jan  8 21:40:53.021: INFO: Waiting up to 15m0s for pod "adopt-release-g8khd" in namespace "job-9539" to be "adopted"
Jan  8 21:40:53.038: INFO: Pod "adopt-release-g8khd": Phase="Running", Reason="", readiness=true. Elapsed: 16.598112ms
Jan  8 21:40:55.044: INFO: Pod "adopt-release-g8khd": Phase="Running", Reason="", readiness=true. Elapsed: 2.022475981s
Jan  8 21:40:55.044: INFO: Pod "adopt-release-g8khd" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan  8 21:40:55.558: INFO: Successfully updated pod "adopt-release-g8khd"
STEP: Checking that the Job releases the Pod
Jan  8 21:40:55.558: INFO: Waiting up to 15m0s for pod "adopt-release-g8khd" in namespace "job-9539" to be "released"
Jan  8 21:40:55.589: INFO: Pod "adopt-release-g8khd": Phase="Running", Reason="", readiness=true. Elapsed: 30.817401ms
Jan  8 21:40:57.602: INFO: Pod "adopt-release-g8khd": Phase="Running", Reason="", readiness=true. Elapsed: 2.043590458s
Jan  8 21:40:57.602: INFO: Pod "adopt-release-g8khd" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:40:57.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9539" for this suite.

• [SLOW TEST:15.300 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":102,"skipped":1589,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:40:57.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan  8 21:40:57.743: INFO: PodSpec: initContainers in spec.initContainers
Jan  8 21:41:55.168: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-262dc0b8-5f8d-4521-9205-2cb1bb2b2d34", GenerateName:"", Namespace:"init-container-7834", SelfLink:"/api/v1/namespaces/init-container-7834/pods/pod-init-262dc0b8-5f8d-4521-9205-2cb1bb2b2d34", UID:"da1dad14-bec2-4f0d-bbb2-31b35ee15c2f", ResourceVersion:"890731", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714116457, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"743614599"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qk8qf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004971500), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qk8qf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qk8qf", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qk8qf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003f50668), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023335c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f506f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f50710)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003f50718), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003f5071c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116457, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116457, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116457, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714116457, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.4"}}, StartTime:(*v1.Time)(0xc0016b31a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00048c070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00048c0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://6dfb6a59370f53e2a45d040ad7b7100032b60b28b8b2e2bafa30ff802b0492be", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016b31e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016b31c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003f5093f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:41:55.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7834" for this suite.

• [SLOW TEST:57.564 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":103,"skipped":1665,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:41:55.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:41:55.295: INFO: Creating deployment "webserver-deployment"
Jan  8 21:41:55.302: INFO: Waiting for observed generation 1
Jan  8 21:41:57.326: INFO: Waiting for all required pods to come up
Jan  8 21:41:57.333: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  8 21:42:19.353: INFO: Waiting for deployment "webserver-deployment" to complete
Jan  8 21:42:19.363: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan  8 21:42:19.373: INFO: Updating deployment webserver-deployment
Jan  8 21:42:19.373: INFO: Waiting for observed generation 2
Jan  8 21:42:22.546: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  8 21:42:22.556: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  8 21:42:22.854: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan  8 21:42:23.591: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  8 21:42:23.591: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  8 21:42:23.602: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan  8 21:42:23.611: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan  8 21:42:23.611: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan  8 21:42:23.624: INFO: Updating deployment webserver-deployment
Jan  8 21:42:23.624: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan  8 21:42:24.188: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  8 21:42:24.714: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
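Editor's note: the replica counts verified here (.spec.replicas = 20 and 13) follow from proportional scaling. The deployment is resized from 10 to 30 mid-rollout, and the controller spreads the extra replicas across both ReplicaSets in proportion to their current sizes (old RS at 8, new RS at 5), capped at 30 + maxSurge(3) = 33 total. The sketch below is a simplified model of that arithmetic; the real controller (pkg/controller/deployment/sync.go) resolves rounding leftovers via max-replicas annotations, so the "last RS takes the remainder" rule here is an assumption for illustration that happens to reproduce this run's numbers.

```go
package main

import "fmt"

// proportionalScale distributes the difference between newTotal and the sum
// of current sizes across the ReplicaSets, each receiving the floor of its
// proportional share; any rounding leftover goes to the last (newest) entry.
func proportionalScale(current []int32, newTotal int32) []int32 {
	var curTotal int32
	for _, c := range current {
		curTotal += c
	}
	toAdd := newTotal - curTotal

	out := make([]int32, len(current))
	var distributed int32
	for i, c := range current {
		share := toAdd * c / curTotal // floor of the proportional share
		out[i] = c + share
		distributed += share
	}
	out[len(out)-1] += toAdd - distributed // simplified leftover rule
	return out
}

func main() {
	// Numbers from the run above: old RS at 8, new (broken-image) RS at 5,
	// and the surge budget allows 30 + 3 = 33 replicas in total.
	fmt.Println(proportionalScale([]int32{8, 5}, 33)) // [20 13]
}
```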
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan  8 21:42:30.565: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5825 /apis/apps/v1/namespaces/deployment-5825/deployments/webserver-deployment 220bf5c4-1d6c-4e55-b966-c43e37e4b951 890982 3 2020-01-08 21:41:55 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041bcc58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-08 21:42:20 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-08 21:42:24 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan  8 21:42:32.593: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5825 /apis/apps/v1/namespaces/deployment-5825/replicasets/webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 891049 3 2020-01-08 21:42:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 220bf5c4-1d6c-4e55-b966-c43e37e4b951 0xc004188927 0xc004188928}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004188998  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan  8 21:42:32.593: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan  8 21:42:32.593: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5825 /apis/apps/v1/namespaces/deployment-5825/replicasets/webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 891056 3 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 220bf5c4-1d6c-4e55-b966-c43e37e4b951 0xc004188867 0xc004188868}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041888c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan  8 21:42:34.073: INFO: Pod "webserver-deployment-595b5b9587-2c9sz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2c9sz webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-2c9sz 33659b14-b97a-4238-86a9-b7f06224e772 890872 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004188e87 0xc004188e88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-08 21:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e33239c31cd36534706a1120fd70189f7555b23c2b85cdc042fab8f56d44796e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.073: INFO: Pod "webserver-deployment-595b5b9587-2fhk9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2fhk9 webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-2fhk9 cef15fe4-f0b8-45cb-83ef-5910e335e721 890895 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189000 0xc004189001}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e9cdae998b8487b65b3130a1c2c56ef62211f5708c0fa9fc0413cdc49d1d15bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.074: INFO: Pod "webserver-deployment-595b5b9587-4vhjt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4vhjt webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-4vhjt a7cb971d-b6d6-4b5e-92d5-450a21dc8fb9 891035 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189160 0xc004189161}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-08 21:42:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.074: INFO: Pod "webserver-deployment-595b5b9587-5d2qn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5d2qn webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-5d2qn 4ea3241b-cdb1-4a67-b693-0ab9c91662d5 891057 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0041892c7 0xc0041892c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-08 21:42:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.074: INFO: Pod "webserver-deployment-595b5b9587-6brsx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6brsx webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-6brsx 9bb55aa6-3d70-45d4-bc24-434fb31b2770 891050 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189427 0xc004189428}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-08 21:42:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.074: INFO: Pod "webserver-deployment-595b5b9587-9hhzw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9hhzw webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-9hhzw 076bd4d2-6206-4fbe-bb9a-cedc159207e2 890901 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189577 0xc004189578}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0199e735395c894a5ad8f3be73fe5d7a3dedc5ea1afcca3353429e921e8dca18,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.075: INFO: Pod "webserver-deployment-595b5b9587-bhz8m" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bhz8m webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-bhz8m 2b8dd5da-b9d9-438a-836f-5ab9b283407b 890868 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0041896e0 0xc0041896e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-08 21:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0a3eb0e2e648593baa234c0e77e963bedb9bf5860bc26865ac4f9dbedbf0f1a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.075: INFO: Pod "webserver-deployment-595b5b9587-bjr78" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bjr78 webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-bjr78 abadef69-9809-45b2-80b2-5e2350ae07b3 891029 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189850 0xc004189851}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.075: INFO: Pod "webserver-deployment-595b5b9587-czf2v" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-czf2v webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-czf2v ef3daf88-854e-48b2-94c1-35be69082f54 891005 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189950 0xc004189951}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.075: INFO: Pod "webserver-deployment-595b5b9587-j7c2r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-j7c2r webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-j7c2r 9cbb0750-8965-4a35-ac68-48f8b3bd5e86 890994 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189a50 0xc004189a51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.075: INFO: Pod "webserver-deployment-595b5b9587-jn2wp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jn2wp webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-jn2wp 3aea74d7-a794-439c-a361-9e55c6cca58f 891022 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189b50 0xc004189b51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.076: INFO: Pod "webserver-deployment-595b5b9587-k2lns" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k2lns webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-k2lns ec42068a-c93f-4cc0-82a5-6d0e3e59c819 890861 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189c70 0xc004189c71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-08 21:42:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://810f311e260ec5f43bde926f28c2923f45baad73605946501000b9edc34b8e17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
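A note on the "is available" / "is not available" verdicts in these INFO lines: k2lns above is the first pod in this dump whose Status has Phase:Running and a Ready condition with Status:True, which is what the deployment test's availability check keys on — a pod counts as available once Ready has been True for at least the deployment's minReadySeconds (unset here, i.e. 0, so Ready implies available). Below is a minimal sketch of that rule, assuming it matches the upstream podutil.IsPodAvailable helper the e2e framework appears to use; the helper names are ours, and only standard v1.PodStatus fields are read.

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podReadyCondition returns the pod's Ready condition, or nil if none has
// been reported yet (the freshly created Pending pods above carry only a
// PodScheduled condition).
func podReadyCondition(status v1.PodStatus) *v1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == v1.PodReady {
			return &status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable mirrors the availability rule: Ready must be True, and must
// have been True for at least minReadySeconds.
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := podReadyCondition(pod.Status)
	if c == nil || c.Status != v1.ConditionTrue {
		return false
	}
	if minReadySeconds == 0 {
		return true
	}
	minReady := time.Duration(minReadySeconds) * time.Second
	return !c.LastTransitionTime.IsZero() && c.LastTransitionTime.Add(minReady).Before(now.Time)
}

func main() {
	// Shape of k2lns above: Running, with Ready=True for some seconds already.
	pod := &v1.Pod{Status: v1.PodStatus{
		Phase: v1.PodRunning,
		Conditions: []v1.PodCondition{{
			Type:               v1.PodReady,
			Status:             v1.ConditionTrue,
			LastTransitionTime: metav1.NewTime(time.Now().Add(-20 * time.Second)),
		}},
	}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // true
}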
Jan  8 21:42:34.076: INFO: Pod "webserver-deployment-595b5b9587-m7jjl" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-m7jjl webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-m7jjl 13021dfa-d834-40f4-8474-706a7b9eed10 891025 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189df0 0xc004189df1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.077: INFO: Pod "webserver-deployment-595b5b9587-qfqfl" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qfqfl webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-qfqfl b83666d3-0823-434c-9ba6-4ba67ad3ea21 890879 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc004189f20 0xc004189f21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-08 21:42:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://17bb3d42140f984b858ac566bad368bfade2719eec75642460682d9827e8b8a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.077: INFO: Pod "webserver-deployment-595b5b9587-s5mfz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s5mfz webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-s5mfz 7d33b5f0-d641-440d-91f4-2396ea437f9b 891021 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0040b8180 0xc0040b8181}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.077: INFO: Pod "webserver-deployment-595b5b9587-svlw7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-svlw7 webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-svlw7 5a547f2d-08dd-400e-bdbd-e1ea03efca89 891024 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0040b8280 0xc0040b8281}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
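One recurring detail in every dump above: the pods all carry the same two Tolerations even though the test never sets any — node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, both Exists/NoExecute with TolerationSeconds:*300. These are injected by the DefaultTolerationSeconds admission plugin at its default of 300 seconds, meaning a pod is evicted five minutes after its node goes NotReady or unreachable. Written out as Go literals against the v1 API types (pointer is k8s.io/utils/pointer):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/utils/pointer"
)

// defaultTolerations is the pair stamped onto every pod above by the
// DefaultTolerationSeconds admission plugin.
var defaultTolerations = []v1.Toleration{
	{Key: "node.kubernetes.io/not-ready", Operator: v1.TolerationOpExists,
		Effect: v1.TaintEffectNoExecute, TolerationSeconds: pointer.Int64Ptr(300)},
	{Key: "node.kubernetes.io/unreachable", Operator: v1.TolerationOpExists,
		Effect: v1.TaintEffectNoExecute, TolerationSeconds: pointer.Int64Ptr(300)},
}

func main() {
	fmt.Println(len(defaultTolerations)) // 2, matching the dumps
}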
Jan  8 21:42:34.077: INFO: Pod "webserver-deployment-595b5b9587-wvglq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wvglq webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-wvglq 6a91c37c-0e4e-4550-a22a-fbd7dd43d52a 890904 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0040b8380 0xc0040b8381}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4b180d526d287cdc6ba2a77fc88f2406b086ea2a4e5482db7700bc1e8b6c8f52,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.077: INFO: Pod "webserver-deployment-595b5b9587-wwqwb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wwqwb webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-wwqwb 8d4a4320-2827-414f-83da-ca8a3e7e1429 890995 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0040b84e0 0xc0040b84e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.078: INFO: Pod "webserver-deployment-595b5b9587-xf9fx" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xf9fx webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-xf9fx d3b59fe1-0631-4bf6-b487-3008e60c9cfd 890875 0 2020-01-08 21:41:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0040b8600 0xc0040b8601}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-08 21:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:41:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-08 21:41:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:42:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://929f5fd529436e3cdda6f8b547f4efea654940ee7d85ea4ea5e636c6a3d7cc5d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.078: INFO: Pod "webserver-deployment-595b5b9587-zsx97" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zsx97 webserver-deployment-595b5b9587- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-595b5b9587-zsx97 c7449039-23dc-42f2-8f41-f0bb75e1be0c 891031 0 2020-01-08 21:42:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a86ce0e6-bbed-4c52-9b08-f0a700c7f7a6 0xc0040b8770 0xc0040b8771}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-08 21:42:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
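zsx97 above captures the state between "just scheduled" and "available": PodScheduled and Initialized are True, but Ready and ContainersReady are False with Reason:ContainersNotReady, because the single httpd container is still Waiting with Reason:ContainerCreating. A short sketch that condenses such a dump into a one-line diagnosis — the helper name is ours, and it only reads standard v1.PodStatus fields:

package main

import (
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
)

// notReadySummary condenses a not-yet-available pod's container states into
// one line, e.g. "httpd: waiting (ContainerCreating)" for zsx97 above.
func notReadySummary(pod *v1.Pod) string {
	var parts []string
	for _, cs := range pod.Status.ContainerStatuses {
		switch {
		case cs.State.Waiting != nil:
			parts = append(parts, fmt.Sprintf("%s: waiting (%s)", cs.Name, cs.State.Waiting.Reason))
		case cs.State.Terminated != nil:
			parts = append(parts, fmt.Sprintf("%s: terminated (%s)", cs.Name, cs.State.Terminated.Reason))
		case !cs.Ready:
			parts = append(parts, fmt.Sprintf("%s: running, not ready", cs.Name))
		}
	}
	if len(parts) == 0 {
		// Pods such as czf2v above report no ContainerStatuses at all yet.
		return "no container statuses reported"
	}
	return strings.Join(parts, "; ")
}

func main() {
	fmt.Println(notReadySummary(&v1.Pod{})) // "no container statuses reported"
}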
Jan  8 21:42:34.078: INFO: Pod "webserver-deployment-c7997dcc8-8hdtz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8hdtz webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-8hdtz 15f2581e-7e4f-4f22-9106-b8c00df00848 890962 0 2020-01-08 21:42:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b88b7 0xc0040b88b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-08 21:42:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
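From 8hdtz onward the dumps switch to the deployment's second ReplicaSet, c7997dcc8, whose template image is webserver:404 — a deliberately unresolvable tag, so these pods stay Pending/ContainerCreating indefinitely and can never become available, consistent with the deployment conformance tests that update to a bad image on purpose to hold a rollout partway. Reusing isPodAvailable (and the imports) from the earlier sketch, a per-ReplicaSet tally would show the c7997dcc8 bucket pinned at zero; the function name is ours:

// availableByReplicaSet tallies available pods per pod-template-hash label,
// which the Deployment controller stamps onto every ReplicaSet and its pods
// (595b5b9587 and c7997dcc8 in the dumps above).
func availableByReplicaSet(pods []v1.Pod, minReadySeconds int32) map[string]int {
	counts := map[string]int{}
	now := metav1.Now()
	for i := range pods {
		hash := pods[i].Labels["pod-template-hash"]
		if isPodAvailable(&pods[i], minReadySeconds, now) {
			counts[hash]++
		}
	}
	return counts
}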
Jan  8 21:42:34.079: INFO: Pod "webserver-deployment-c7997dcc8-cnq9c" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cnq9c webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-cnq9c c1bd5251-95e2-4c8a-af7f-463bd29c3c03 891010 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b8a20 0xc0040b8a21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.079: INFO: Pod "webserver-deployment-c7997dcc8-dwltc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dwltc webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-dwltc 87f13c66-7bdd-4f84-8467-a9db504f51e7 890996 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b8b30 0xc0040b8b31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.079: INFO: Pod "webserver-deployment-c7997dcc8-g8bdw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g8bdw webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-g8bdw 72b00d6e-009c-4c5b-b946-aa159e1dbd83 890934 0 2020-01-08 21:42:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b8c50 0xc0040b8c51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-08 21:42:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.080: INFO: Pod "webserver-deployment-c7997dcc8-lf622" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lf622 webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-lf622 2ef30561-9b33-4ddf-8e8d-232aa6beee00 891019 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b8dc0 0xc0040b8dc1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.081: INFO: Pod "webserver-deployment-c7997dcc8-ltklz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ltklz webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-ltklz 7ffd306e-acff-49a5-b425-241437489bad 891018 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b8ee0 0xc0040b8ee1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.081: INFO: Pod "webserver-deployment-c7997dcc8-p85gn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p85gn webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-p85gn 48bb0cb3-79e8-4767-affa-e487b2c92b8e 890944 0 2020-01-08 21:42:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b9000 0xc0040b9001}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-08 21:42:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.082: INFO: Pod "webserver-deployment-c7997dcc8-qpndm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qpndm webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-qpndm bf28db7c-5cbd-478b-9694-1e98b454cd9b 891030 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b9180 0xc0040b9181}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.082: INFO: Pod "webserver-deployment-c7997dcc8-qq79g" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qq79g webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-qq79g 85236be2-c091-45e7-8f96-7894e3548a7d 891020 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b9290 0xc0040b9291}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.082: INFO: Pod "webserver-deployment-c7997dcc8-rdksw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rdksw webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-rdksw 83c509cc-3f56-4f25-9f3e-580c119805a9 890959 0 2020-01-08 21:42:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b93d0 0xc0040b93d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-08 21:42:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.083: INFO: Pod "webserver-deployment-c7997dcc8-w8ltd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w8ltd webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-w8ltd 78684548-4e75-4da8-85fb-eb1ddb4cdd98 891017 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b9550 0xc0040b9551}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.083: INFO: Pod "webserver-deployment-c7997dcc8-wcnqq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wcnqq webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-wcnqq be5fad63-0fa1-44ad-8db1-8fe4aec6713c 891062 0 2020-01-08 21:42:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b96a0 0xc0040b96a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-08 21:42:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan  8 21:42:34.083: INFO: Pod "webserver-deployment-c7997dcc8-xn7rq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xn7rq webserver-deployment-c7997dcc8- deployment-5825 /api/v1/namespaces/deployment-5825/pods/webserver-deployment-c7997dcc8-xn7rq ab223469-de8f-48d7-9416-c7da2ed7fc8f 890938 0 2020-01-08 21:42:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 33b08c1c-4e94-4cc8-88cb-1bb8f728bf67 0xc0040b9800 0xc0040b9801}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wcws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wcws,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wcws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:42:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-08 21:42:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
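
The dumps above are the pods the deployment controller counts as not available: every one is Pending, with the httpd container either unscheduled-looking (no Ready condition yet) or stuck Waiting in ContainerCreating, because the rolled-out image webserver:404 cannot be pulled. Availability reduces to the pod's Ready condition (plus a minReadySeconds grace not modeled here); a minimal Go sketch of that check, standing in for (not reproducing) the controller's logic, using only fields visible in the dumps:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's Ready condition is True. The pods
// dumped above carry Ready=False (Reason=ContainersNotReady) or no Ready
// condition at all, so a check like this returns false for each of them.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Minimal stand-in for one of the dumped pods: Pending, Ready=False.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // false
}
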
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:42:34.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5825" for this suite.

• [SLOW TEST:41.972 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":104,"skipped":1670,"failed":0}
SS
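
Proportional scaling, the behaviour this spec verifies, falls out of the RollingUpdate budget: when a deployment that is stuck mid-rollout gets scaled, the controller splits the added replicas between the old and new ReplicaSets in proportion to their current sizes, which is why the log shows ready httpd pods alongside the permanently unavailable webserver:404 ones. A minimal sketch of a Deployment configured this way; the name mirrors the test's, but the replica and surge/unavailable numbers are illustrative rather than guaranteed to be the fixture's exact values:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(10)
	maxSurge := intstr.FromInt(3)       // up to 3 pods above the desired count
	maxUnavailable := intstr.FromInt(2) // up to 2 pods below it

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "httpd"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "httpd"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: "docker.io/library/httpd:2.4"}},
				},
			},
		},
	}
	fmt.Printf("%s: surge=%s, unavailable=%s\n", d.Name, maxSurge.String(), maxUnavailable.String())
}

Updating such a deployment's template to an unpullable image and then scaling it up, roughly the sequence this spec drives, leaves two ReplicaSets, and the surge budget plus the new replicas are divided between them proportionally.
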
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:42:37.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-8138/secret-test-abe70227-7c4f-47ab-ac4a-dc9eca57429e
STEP: Creating a pod to test consume secrets
Jan  8 21:42:39.643: INFO: Waiting up to 5m0s for pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb" in namespace "secrets-8138" to be "success or failure"
Jan  8 21:42:39.936: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 292.93994ms
Jan  8 21:42:42.038: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394833048s
Jan  8 21:42:46.428: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.78531411s
Jan  8 21:42:50.818: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.17493151s
Jan  8 21:42:53.865: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.22157392s
Jan  8 21:42:57.096: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.452449606s
Jan  8 21:42:59.742: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.09929312s
Jan  8 21:43:01.861: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.217736972s
Jan  8 21:43:03.877: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.234241548s
Jan  8 21:43:06.226: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.583320528s
Jan  8 21:43:08.370: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.726476119s
Jan  8 21:43:10.535: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.891997786s
Jan  8 21:43:12.767: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.123506803s
Jan  8 21:43:14.890: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 35.247218283s
Jan  8 21:43:16.901: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 37.258372142s
Jan  8 21:43:18.911: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 39.267849958s
Jan  8 21:43:20.923: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.27949862s
STEP: Saw pod success
Jan  8 21:43:20.923: INFO: Pod "pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb" satisfied condition "success or failure"
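
The Pending/Succeeded lines above are a simple phase poll. A rough client-go equivalent is sketched below; it is not the framework's own helper, and it assumes a current client-go (the 1.17-era client's Get takes no context argument):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSuccessOrFailure polls a pod until it reaches Succeeded or
// Failed, logging each observation much like the lines above.
func waitForPodSuccessOrFailure(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodSuccessOrFailure(cs, "secrets-8138",
		"pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb", 5*time.Minute); err != nil {
		panic(err)
	}
}
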
Jan  8 21:43:20.929: INFO: Trying to get logs from node jerma-node pod pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb container env-test: 
STEP: delete the pod
Jan  8 21:43:20.978: INFO: Waiting for pod pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb to disappear
Jan  8 21:43:21.008: INFO: Pod pod-configmaps-621ad4a9-c0ec-4c82-8b46-7fce9da4e6bb no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:43:21.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8138" for this suite.

• [SLOW TEST:43.859 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1672,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
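
This spec wires a Secret key into a container environment variable through valueFrom.secretKeyRef and then runs a one-shot command so the output can be asserted on. A minimal Go sketch of such a pod object; the secret name and container name are taken from the log above, while the key name, image, and command are assumptions, since the manifest itself is never printed:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-test", Namespace: "secrets-8138"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, then Succeeded
			Containers: []corev1.Container{{
				Name:    "env-test", // container name from the log above
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							// Secret name from the "creating secret" step above.
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "secret-test-abe70227-7c4f-47ab-ac4a-dc9eca57429e",
							},
							Key: "data-1", // assumed key name
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[0].ValueFrom.SecretKeyRef.Key)
}
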
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:43:21.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:43:29.214: INFO: Waiting up to 5m0s for pod "client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4" in namespace "pods-8391" to be "success or failure"
Jan  8 21:43:29.346: INFO: Pod "client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4": Phase="Pending", Reason="", readiness=false. Elapsed: 132.337864ms
Jan  8 21:43:31.354: INFO: Pod "client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140002166s
Jan  8 21:43:33.364: INFO: Pod "client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150543325s
Jan  8 21:43:35.376: INFO: Pod "client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161697232s
Jan  8 21:43:37.383: INFO: Pod "client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.169089047s
STEP: Saw pod success
Jan  8 21:43:37.383: INFO: Pod "client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4" satisfied condition "success or failure"
Jan  8 21:43:37.386: INFO: Trying to get logs from node jerma-node pod client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4 container env3cont: 
STEP: delete the pod
Jan  8 21:43:37.418: INFO: Waiting for pod client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4 to disappear
Jan  8 21:43:37.446: INFO: Pod client-envvars-7e268d2a-baae-4c54-9de0-765dfa73bce4 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:43:37.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8391" for this suite.

• [SLOW TEST:16.435 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1704,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
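
The assertion here is the legacy service-discovery mechanism: for every Service that exists when a pod is created, the kubelet injects <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables into the container environment, with the service name upper-cased and dashes mapped to underscores. A small sketch of the naming rule; the service name is hypothetical, since the log never prints the one the test creates:

package main

import (
	"fmt"
	"strings"
)

// serviceEnvName maps a Service name to the prefix of the environment
// variables the kubelet injects for it (upper-case, '-' becomes '_').
func serviceEnvName(service string) string {
	return strings.ToUpper(strings.ReplaceAll(service, "-", "_"))
}

func main() {
	// Hypothetical service name used only to show the convention.
	for _, suffix := range []string{"_SERVICE_HOST", "_SERVICE_PORT"} {
		fmt.Println(serviceEnvName("fooservice") + suffix)
	}
}
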
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:43:37.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:43:37.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan  8 21:43:40.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 create -f -'
Jan  8 21:43:43.741: INFO: stderr: ""
Jan  8 21:43:43.742: INFO: stdout: "e2e-test-crd-publish-openapi-6643-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan  8 21:43:43.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 delete e2e-test-crd-publish-openapi-6643-crds test-foo'
Jan  8 21:43:43.895: INFO: stderr: ""
Jan  8 21:43:43.895: INFO: stdout: "e2e-test-crd-publish-openapi-6643-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan  8 21:43:43.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 apply -f -'
Jan  8 21:43:44.384: INFO: stderr: ""
Jan  8 21:43:44.384: INFO: stdout: "e2e-test-crd-publish-openapi-6643-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan  8 21:43:44.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 delete e2e-test-crd-publish-openapi-6643-crds test-foo'
Jan  8 21:43:44.547: INFO: stderr: ""
Jan  8 21:43:44.548: INFO: stdout: "e2e-test-crd-publish-openapi-6643-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan  8 21:43:44.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 create -f -'
Jan  8 21:43:44.914: INFO: rc: 1
Jan  8 21:43:44.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 apply -f -'
Jan  8 21:43:45.275: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan  8 21:43:45.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 create -f -'
Jan  8 21:43:45.590: INFO: rc: 1
Jan  8 21:43:45.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3695 apply -f -'
Jan  8 21:43:45.956: INFO: rc: 1
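
The rc: 1 results above are kubectl's client-side validation rejecting manifests that do not fit the OpenAPI schema the apiserver publishes for this CRD. The manifests travel over stdin and never reach the log, so the payloads below are assumed shapes that merely illustrate the three cases being exercised, not the test's exact bytes:

package main

import "fmt"

// Assumed CR payloads: the first satisfies the published schema; the second
// adds an unknown property; the third omits an assumed required property.
// kubectl create/apply accept the first and reject the other two.
const (
	validCR        = `{"apiVersion":"crd-publish-openapi-test-foo.example.com/v1","kind":"E2e-test-crd-publish-openapi-6643-crd","metadata":{"name":"test-foo"},"spec":{"bars":[{"name":"test-bar"}]}}`
	unknownFieldCR = `{"apiVersion":"crd-publish-openapi-test-foo.example.com/v1","kind":"E2e-test-crd-publish-openapi-6643-crd","metadata":{"name":"test-foo"},"spec":{"foo":true}}`
	missingReqCR   = `{"apiVersion":"crd-publish-openapi-test-foo.example.com/v1","kind":"E2e-test-crd-publish-openapi-6643-crd","metadata":{"name":"test-foo"},"spec":{"bars":[{"age":"10"}]}}`
)

func main() {
	for _, cr := range []string{validCR, unknownFieldCR, missingReqCR} {
		fmt.Println(cr)
	}
}
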
STEP: kubectl explain works to explain CR properties
Jan  8 21:43:45.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6643-crds'
Jan  8 21:43:46.257: INFO: stderr: ""
Jan  8 21:43:46.257: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6643-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan  8 21:43:46.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6643-crds.metadata'
Jan  8 21:43:46.754: INFO: stderr: ""
Jan  8 21:43:46.754: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6643-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan  8 21:43:46.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6643-crds.spec'
Jan  8 21:43:47.179: INFO: stderr: ""
Jan  8 21:43:47.179: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6643-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan  8 21:43:47.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6643-crds.spec.bars'
Jan  8 21:43:47.502: INFO: stderr: ""
Jan  8 21:43:47.502: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6643-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan  8 21:43:47.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6643-crds.spec.bars2'
Jan  8 21:43:48.018: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:43:51.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3695" for this suite.

• [SLOW TEST:14.222 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":107,"skipped":1725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:43:51.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:43:51.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  8 21:43:52.124: INFO: stderr: ""
Jan  8 21:43:52.125: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:43:52.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4730" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":108,"skipped":1767,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:43:52.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-3bb9304c-1381-4f4e-8073-bfb4b1382068
STEP: Creating a pod to test consume configMaps
Jan  8 21:43:52.269: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2" in namespace "projected-9833" to be "success or failure"
Jan  8 21:43:52.275: INFO: Pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.595369ms
Jan  8 21:43:54.285: INFO: Pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015247573s
Jan  8 21:43:56.293: INFO: Pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02354153s
Jan  8 21:43:58.305: INFO: Pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035876674s
Jan  8 21:44:00.321: INFO: Pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052094628s
Jan  8 21:44:02.330: INFO: Pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060292231s
STEP: Saw pod success
Jan  8 21:44:02.330: INFO: Pod "pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2" satisfied condition "success or failure"
Jan  8 21:44:02.336: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan  8 21:44:02.421: INFO: Waiting for pod pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2 to disappear
Jan  8 21:44:02.425: INFO: Pod pod-projected-configmaps-0f61162d-33e9-4376-b652-319cd6ac89b2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:02.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9833" for this suite.

• [SLOW TEST:10.283 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1775,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:02.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-9vh2
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 21:44:02.668: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9vh2" in namespace "subpath-9737" to be "success or failure"
Jan  8 21:44:02.673: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.673914ms
Jan  8 21:44:04.731: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062556472s
Jan  8 21:44:06.758: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089210659s
Jan  8 21:44:08.765: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 6.096638876s
Jan  8 21:44:10.794: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 8.125457403s
Jan  8 21:44:12.801: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 10.132249802s
Jan  8 21:44:14.809: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 12.139884574s
Jan  8 21:44:16.815: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 14.146637801s
Jan  8 21:44:18.826: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 16.156749628s
Jan  8 21:44:20.832: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 18.16361782s
Jan  8 21:44:22.847: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 20.178076815s
Jan  8 21:44:24.856: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 22.187297837s
Jan  8 21:44:26.864: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Running", Reason="", readiness=true. Elapsed: 24.194936306s
Jan  8 21:44:28.871: INFO: Pod "pod-subpath-test-secret-9vh2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.201999469s
STEP: Saw pod success
Jan  8 21:44:28.871: INFO: Pod "pod-subpath-test-secret-9vh2" satisfied condition "success or failure"
Jan  8 21:44:28.876: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-9vh2 container test-container-subpath-secret-9vh2: <nil>
STEP: delete the pod
Jan  8 21:44:28.956: INFO: Waiting for pod pod-subpath-test-secret-9vh2 to disappear
Jan  8 21:44:28.962: INFO: Pod pod-subpath-test-secret-9vh2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-9vh2
Jan  8 21:44:28.962: INFO: Deleting pod "pod-subpath-test-secret-9vh2" in namespace "subpath-9737"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:28.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9737" for this suite.

• [SLOW TEST:26.547 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":110,"skipped":1783,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:28.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-b04f93df-a7b5-4d4d-bf77-2a6723e8b3bb
STEP: Creating a pod to test consume configMaps
Jan  8 21:44:29.090: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61" in namespace "projected-1971" to be "success or failure"
Jan  8 21:44:29.128: INFO: Pod "pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61": Phase="Pending", Reason="", readiness=false. Elapsed: 38.321811ms
Jan  8 21:44:31.142: INFO: Pod "pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051357727s
Jan  8 21:44:33.149: INFO: Pod "pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059098959s
Jan  8 21:44:35.155: INFO: Pod "pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064894029s
Jan  8 21:44:37.164: INFO: Pod "pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073925725s
STEP: Saw pod success
Jan  8 21:44:37.164: INFO: Pod "pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61" satisfied condition "success or failure"
Jan  8 21:44:37.168: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan  8 21:44:37.233: INFO: Waiting for pod pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61 to disappear
Jan  8 21:44:37.276: INFO: Pod pod-projected-configmaps-0fa04c53-e364-45de-9e7e-870c1f27ab61 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:37.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1971" for this suite.

• [SLOW TEST:8.302 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1790,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:37.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:44:37.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:43.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2026" for this suite.

• [SLOW TEST:6.440 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1809,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:43.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:44:43.865: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:44.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5160" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":113,"skipped":1835,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:44.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:44:44.582: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 11.905772ms)
Jan  8 21:44:44.593: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 11.070187ms)
Jan  8 21:44:44.650: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 56.75578ms)
Jan  8 21:44:44.655: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.52538ms)
Jan  8 21:44:44.660: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 5.121001ms)
Jan  8 21:44:44.665: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.744243ms)
Jan  8 21:44:44.669: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.39626ms)
Jan  8 21:44:44.674: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.821435ms)
Jan  8 21:44:44.679: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.540859ms)
Jan  8 21:44:44.684: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 5.302227ms)
Jan  8 21:44:44.690: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 6.384307ms)
Jan  8 21:44:44.695: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.125911ms)
Jan  8 21:44:44.698: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.219198ms)
Jan  8 21:44:44.701: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.032343ms)
Jan  8 21:44:44.704: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.264464ms)
Jan  8 21:44:44.707: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 2.752129ms)
Jan  8 21:44:44.710: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.347002ms)
Jan  8 21:44:44.714: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.557832ms)
Jan  8 21:44:44.717: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.215052ms)
Jan  8 21:44:44.721: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.979168ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:44.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8215" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":114,"skipped":1847,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:44.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jan  8 21:44:44.809: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix364389893/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:44.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5634" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":115,"skipped":1871,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:44.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:44:45.136: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:45.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8156" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":116,"skipped":1879,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:45.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 21:44:58.079: INFO: DNS probes using dns-7383/dns-test-8e494c15-86a1-4726-ae42-cacb9177be20 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:44:58.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7383" for this suite.

• [SLOW TEST:12.462 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":117,"skipped":1899,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:44:58.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  8 21:45:14.945: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 21:45:15.143: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 21:45:17.143: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 21:45:17.150: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 21:45:19.143: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 21:45:19.155: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 21:45:21.143: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 21:45:21.152: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 21:45:23.143: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 21:45:23.148: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:45:23.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5424" for this suite.

• [SLOW TEST:24.850 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1900,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:45:23.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:45:23.302: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  8 21:45:23.341: INFO: Number of nodes with available pods: 0
Jan  8 21:45:23.341: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:24.703: INFO: Number of nodes with available pods: 0
Jan  8 21:45:24.703: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:25.353: INFO: Number of nodes with available pods: 0
Jan  8 21:45:25.353: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:26.359: INFO: Number of nodes with available pods: 0
Jan  8 21:45:26.359: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:28.911: INFO: Number of nodes with available pods: 0
Jan  8 21:45:28.911: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:30.468: INFO: Number of nodes with available pods: 0
Jan  8 21:45:30.468: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:31.406: INFO: Number of nodes with available pods: 0
Jan  8 21:45:31.406: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:32.374: INFO: Number of nodes with available pods: 1
Jan  8 21:45:32.374: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  8 21:45:33.404: INFO: Number of nodes with available pods: 2
Jan  8 21:45:33.404: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  8 21:45:33.446: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:33.446: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:34.468: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:34.468: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:35.487: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:35.487: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:36.467: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:36.467: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:37.784: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:37.784: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:38.468: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:38.468: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:39.468: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:39.468: INFO: Pod daemon-set-kcv2k is not available
Jan  8 21:45:39.468: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:40.467: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:40.467: INFO: Pod daemon-set-kcv2k is not available
Jan  8 21:45:40.467: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:41.462: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:41.462: INFO: Pod daemon-set-kcv2k is not available
Jan  8 21:45:41.462: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:42.464: INFO: Wrong image for pod: daemon-set-kcv2k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:42.465: INFO: Pod daemon-set-kcv2k is not available
Jan  8 21:45:42.465: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:43.464: INFO: Pod daemon-set-s2224 is not available
Jan  8 21:45:43.464: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:44.470: INFO: Pod daemon-set-s2224 is not available
Jan  8 21:45:44.470: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:45.465: INFO: Pod daemon-set-s2224 is not available
Jan  8 21:45:45.465: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:46.468: INFO: Pod daemon-set-s2224 is not available
Jan  8 21:45:46.468: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:47.882: INFO: Pod daemon-set-s2224 is not available
Jan  8 21:45:47.882: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:48.625: INFO: Pod daemon-set-s2224 is not available
Jan  8 21:45:48.625: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:49.465: INFO: Pod daemon-set-s2224 is not available
Jan  8 21:45:49.465: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:50.463: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:51.464: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:52.467: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:53.464: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:54.468: INFO: Wrong image for pod: daemon-set-sr9rv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan  8 21:45:54.468: INFO: Pod daemon-set-sr9rv is not available
Jan  8 21:45:55.463: INFO: Pod daemon-set-h8wmn is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  8 21:45:55.481: INFO: Number of nodes with available pods: 1
Jan  8 21:45:55.482: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:56.547: INFO: Number of nodes with available pods: 1
Jan  8 21:45:56.548: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:57.496: INFO: Number of nodes with available pods: 1
Jan  8 21:45:57.496: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:58.508: INFO: Number of nodes with available pods: 1
Jan  8 21:45:58.508: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:45:59.507: INFO: Number of nodes with available pods: 1
Jan  8 21:45:59.508: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:46:00.502: INFO: Number of nodes with available pods: 1
Jan  8 21:46:00.502: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:46:01.513: INFO: Number of nodes with available pods: 1
Jan  8 21:46:01.513: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:46:02.494: INFO: Number of nodes with available pods: 2
Jan  8 21:46:02.494: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3090, will wait for the garbage collector to delete the pods
Jan  8 21:46:02.629: INFO: Deleting DaemonSet.extensions daemon-set took: 12.735553ms
Jan  8 21:46:02.929: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.44187ms
Jan  8 21:46:08.836: INFO: Number of nodes with available pods: 0
Jan  8 21:46:08.836: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 21:46:08.840: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3090/daemonsets","resourceVersion":"892153"},"items":null}

Jan  8 21:46:08.844: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3090/pods","resourceVersion":"892153"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:46:08.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3090" for this suite.

• [SLOW TEST:45.691 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":119,"skipped":1910,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:46:08.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-6378
STEP: creating replication controller nodeport-test in namespace services-6378
I0108 21:46:09.020103       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-6378, replica count: 2
I0108 21:46:12.071899       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 21:46:15.072368       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 21:46:18.072781       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  8 21:46:18.072: INFO: Creating new exec pod
Jan  8 21:46:25.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6378 execpod48xvf -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan  8 21:46:25.589: INFO: stderr: "I0108 21:46:25.402180    1535 log.go:172] (0xc0009d8fd0) (0xc0009cc3c0) Create stream\nI0108 21:46:25.402449    1535 log.go:172] (0xc0009d8fd0) (0xc0009cc3c0) Stream added, broadcasting: 1\nI0108 21:46:25.419139    1535 log.go:172] (0xc0009d8fd0) Reply frame received for 1\nI0108 21:46:25.419207    1535 log.go:172] (0xc0009d8fd0) (0xc00067e500) Create stream\nI0108 21:46:25.419225    1535 log.go:172] (0xc0009d8fd0) (0xc00067e500) Stream added, broadcasting: 3\nI0108 21:46:25.422200    1535 log.go:172] (0xc0009d8fd0) Reply frame received for 3\nI0108 21:46:25.422230    1535 log.go:172] (0xc0009d8fd0) (0xc0005212c0) Create stream\nI0108 21:46:25.422241    1535 log.go:172] (0xc0009d8fd0) (0xc0005212c0) Stream added, broadcasting: 5\nI0108 21:46:25.425953    1535 log.go:172] (0xc0009d8fd0) Reply frame received for 5\nI0108 21:46:25.507075    1535 log.go:172] (0xc0009d8fd0) Data frame received for 5\nI0108 21:46:25.507198    1535 log.go:172] (0xc0005212c0) (5) Data frame handling\nI0108 21:46:25.507226    1535 log.go:172] (0xc0005212c0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0108 21:46:25.511246    1535 log.go:172] (0xc0009d8fd0) Data frame received for 5\nI0108 21:46:25.511293    1535 log.go:172] (0xc0005212c0) (5) Data frame handling\nI0108 21:46:25.511313    1535 log.go:172] (0xc0005212c0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0108 21:46:25.577664    1535 log.go:172] (0xc0009d8fd0) Data frame received for 1\nI0108 21:46:25.577762    1535 log.go:172] (0xc0009cc3c0) (1) Data frame handling\nI0108 21:46:25.577861    1535 log.go:172] (0xc0009cc3c0) (1) Data frame sent\nI0108 21:46:25.577930    1535 log.go:172] (0xc0009d8fd0) (0xc0009cc3c0) Stream removed, broadcasting: 1\nI0108 21:46:25.578085    1535 log.go:172] (0xc0009d8fd0) (0xc00067e500) Stream removed, broadcasting: 3\nI0108 21:46:25.578266    1535 log.go:172] (0xc0009d8fd0) (0xc0005212c0) Stream removed, broadcasting: 5\nI0108 21:46:25.578455    1535 log.go:172] (0xc0009d8fd0) Go away received\nI0108 21:46:25.579162    1535 log.go:172] (0xc0009d8fd0) (0xc0009cc3c0) Stream removed, broadcasting: 1\nI0108 21:46:25.579179    1535 log.go:172] (0xc0009d8fd0) (0xc00067e500) Stream removed, broadcasting: 3\nI0108 21:46:25.579186    1535 log.go:172] (0xc0009d8fd0) (0xc0005212c0) Stream removed, broadcasting: 5\n"
Jan  8 21:46:25.589: INFO: stdout: ""
Jan  8 21:46:25.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6378 execpod48xvf -- /bin/sh -x -c nc -zv -t -w 2 10.96.58.141 80'
Jan  8 21:46:26.057: INFO: stderr: "I0108 21:46:25.833229    1556 log.go:172] (0xc000aac000) (0xc00061a6e0) Create stream\nI0108 21:46:25.833548    1556 log.go:172] (0xc000aac000) (0xc00061a6e0) Stream added, broadcasting: 1\nI0108 21:46:25.838071    1556 log.go:172] (0xc000aac000) Reply frame received for 1\nI0108 21:46:25.838102    1556 log.go:172] (0xc000aac000) (0xc000643ae0) Create stream\nI0108 21:46:25.838109    1556 log.go:172] (0xc000aac000) (0xc000643ae0) Stream added, broadcasting: 3\nI0108 21:46:25.840974    1556 log.go:172] (0xc000aac000) Reply frame received for 3\nI0108 21:46:25.840998    1556 log.go:172] (0xc000aac000) (0xc000643cc0) Create stream\nI0108 21:46:25.841003    1556 log.go:172] (0xc000aac000) (0xc000643cc0) Stream added, broadcasting: 5\nI0108 21:46:25.842358    1556 log.go:172] (0xc000aac000) Reply frame received for 5\nI0108 21:46:25.937793    1556 log.go:172] (0xc000aac000) Data frame received for 5\nI0108 21:46:25.938231    1556 log.go:172] (0xc000643cc0) (5) Data frame handling\nI0108 21:46:25.938397    1556 log.go:172] (0xc000643cc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.58.141 80\nI0108 21:46:25.942057    1556 log.go:172] (0xc000aac000) Data frame received for 5\nI0108 21:46:25.942086    1556 log.go:172] (0xc000643cc0) (5) Data frame handling\nI0108 21:46:25.942102    1556 log.go:172] (0xc000643cc0) (5) Data frame sent\nConnection to 10.96.58.141 80 port [tcp/http] succeeded!\nI0108 21:46:26.044591    1556 log.go:172] (0xc000aac000) Data frame received for 1\nI0108 21:46:26.044739    1556 log.go:172] (0xc000aac000) (0xc000643ae0) Stream removed, broadcasting: 3\nI0108 21:46:26.044841    1556 log.go:172] (0xc00061a6e0) (1) Data frame handling\nI0108 21:46:26.044883    1556 log.go:172] (0xc00061a6e0) (1) Data frame sent\nI0108 21:46:26.044951    1556 log.go:172] (0xc000aac000) (0xc000643cc0) Stream removed, broadcasting: 5\nI0108 21:46:26.045040    1556 log.go:172] (0xc000aac000) (0xc00061a6e0) Stream removed, broadcasting: 1\nI0108 21:46:26.045072    1556 log.go:172] (0xc000aac000) Go away received\nI0108 21:46:26.046248    1556 log.go:172] (0xc000aac000) (0xc00061a6e0) Stream removed, broadcasting: 1\nI0108 21:46:26.046261    1556 log.go:172] (0xc000aac000) (0xc000643ae0) Stream removed, broadcasting: 3\nI0108 21:46:26.046268    1556 log.go:172] (0xc000aac000) (0xc000643cc0) Stream removed, broadcasting: 5\n"
Jan  8 21:46:26.057: INFO: stdout: ""
Jan  8 21:46:26.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6378 execpod48xvf -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32050'
Jan  8 21:46:26.346: INFO: stderr: "I0108 21:46:26.178380    1576 log.go:172] (0xc00010d600) (0xc00067fae0) Create stream\nI0108 21:46:26.178645    1576 log.go:172] (0xc00010d600) (0xc00067fae0) Stream added, broadcasting: 1\nI0108 21:46:26.182219    1576 log.go:172] (0xc00010d600) Reply frame received for 1\nI0108 21:46:26.182299    1576 log.go:172] (0xc00010d600) (0xc0005a6000) Create stream\nI0108 21:46:26.182323    1576 log.go:172] (0xc00010d600) (0xc0005a6000) Stream added, broadcasting: 3\nI0108 21:46:26.183756    1576 log.go:172] (0xc00010d600) Reply frame received for 3\nI0108 21:46:26.183781    1576 log.go:172] (0xc00010d600) (0xc0005a6140) Create stream\nI0108 21:46:26.183790    1576 log.go:172] (0xc00010d600) (0xc0005a6140) Stream added, broadcasting: 5\nI0108 21:46:26.184861    1576 log.go:172] (0xc00010d600) Reply frame received for 5\nI0108 21:46:26.253228    1576 log.go:172] (0xc00010d600) Data frame received for 5\nI0108 21:46:26.253341    1576 log.go:172] (0xc0005a6140) (5) Data frame handling\nI0108 21:46:26.253359    1576 log.go:172] (0xc0005a6140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32050\nI0108 21:46:26.254674    1576 log.go:172] (0xc00010d600) Data frame received for 5\nI0108 21:46:26.254688    1576 log.go:172] (0xc0005a6140) (5) Data frame handling\nI0108 21:46:26.254700    1576 log.go:172] (0xc0005a6140) (5) Data frame sent\nConnection to 10.96.2.250 32050 port [tcp/32050] succeeded!\nI0108 21:46:26.330521    1576 log.go:172] (0xc00010d600) Data frame received for 1\nI0108 21:46:26.330861    1576 log.go:172] (0xc00067fae0) (1) Data frame handling\nI0108 21:46:26.330934    1576 log.go:172] (0xc00067fae0) (1) Data frame sent\nI0108 21:46:26.331021    1576 log.go:172] (0xc00010d600) (0xc00067fae0) Stream removed, broadcasting: 1\nI0108 21:46:26.332137    1576 log.go:172] (0xc00010d600) (0xc0005a6000) Stream removed, broadcasting: 3\nI0108 21:46:26.332308    1576 log.go:172] (0xc00010d600) (0xc0005a6140) Stream removed, broadcasting: 5\nI0108 21:46:26.332438    1576 log.go:172] (0xc00010d600) (0xc00067fae0) Stream removed, broadcasting: 1\nI0108 21:46:26.332503    1576 log.go:172] (0xc00010d600) (0xc0005a6000) Stream removed, broadcasting: 3\nI0108 21:46:26.332603    1576 log.go:172] (0xc00010d600) (0xc0005a6140) Stream removed, broadcasting: 5\nI0108 21:46:26.333228    1576 log.go:172] (0xc00010d600) Go away received\n"
Jan  8 21:46:26.347: INFO: stdout: ""
Jan  8 21:46:26.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6378 execpod48xvf -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32050'
Jan  8 21:46:26.862: INFO: stderr: "I0108 21:46:26.516919    1597 log.go:172] (0xc0005f0790) (0xc0005ea000) Create stream\nI0108 21:46:26.517409    1597 log.go:172] (0xc0005f0790) (0xc0005ea000) Stream added, broadcasting: 1\nI0108 21:46:26.529506    1597 log.go:172] (0xc0005f0790) Reply frame received for 1\nI0108 21:46:26.529706    1597 log.go:172] (0xc0005f0790) (0xc00060bae0) Create stream\nI0108 21:46:26.529739    1597 log.go:172] (0xc0005f0790) (0xc00060bae0) Stream added, broadcasting: 3\nI0108 21:46:26.534662    1597 log.go:172] (0xc0005f0790) Reply frame received for 3\nI0108 21:46:26.535120    1597 log.go:172] (0xc0005f0790) (0xc0001b2000) Create stream\nI0108 21:46:26.535167    1597 log.go:172] (0xc0005f0790) (0xc0001b2000) Stream added, broadcasting: 5\nI0108 21:46:26.537557    1597 log.go:172] (0xc0005f0790) Reply frame received for 5\nI0108 21:46:26.658094    1597 log.go:172] (0xc0005f0790) Data frame received for 5\nI0108 21:46:26.658292    1597 log.go:172] (0xc0001b2000) (5) Data frame handling\nI0108 21:46:26.658387    1597 log.go:172] (0xc0001b2000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32050\nI0108 21:46:26.658803    1597 log.go:172] (0xc0005f0790) Data frame received for 5\nI0108 21:46:26.658824    1597 log.go:172] (0xc0001b2000) (5) Data frame handling\nI0108 21:46:26.658838    1597 log.go:172] (0xc0001b2000) (5) Data frame sent\nConnection to 10.96.1.234 32050 port [tcp/32050] succeeded!\nI0108 21:46:26.828875    1597 log.go:172] (0xc0005f0790) (0xc00060bae0) Stream removed, broadcasting: 3\nI0108 21:46:26.829941    1597 log.go:172] (0xc0005f0790) Data frame received for 1\nI0108 21:46:26.829980    1597 log.go:172] (0xc0005ea000) (1) Data frame handling\nI0108 21:46:26.830028    1597 log.go:172] (0xc0005ea000) (1) Data frame sent\nI0108 21:46:26.830055    1597 log.go:172] (0xc0005f0790) (0xc0005ea000) Stream removed, broadcasting: 1\nI0108 21:46:26.831592    1597 log.go:172] (0xc0005f0790) (0xc0001b2000) Stream removed, broadcasting: 5\nI0108 21:46:26.832120    1597 log.go:172] (0xc0005f0790) Go away received\nI0108 21:46:26.832301    1597 log.go:172] (0xc0005f0790) (0xc0005ea000) Stream removed, broadcasting: 1\nI0108 21:46:26.832387    1597 log.go:172] (0xc0005f0790) (0xc00060bae0) Stream removed, broadcasting: 3\nI0108 21:46:26.832792    1597 log.go:172] (0xc0005f0790) (0xc0001b2000) Stream removed, broadcasting: 5\n"
Jan  8 21:46:26.863: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:46:26.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6378" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:18.047 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":120,"skipped":1957,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:46:26.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8702
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  8 21:46:27.028: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  8 21:47:01.234: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.3 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 21:47:01.234: INFO: >>> kubeConfig: /root/.kube/config
I0108 21:47:01.299347       9 log.go:172] (0xc0029328f0) (0xc000f27860) Create stream
I0108 21:47:01.299405       9 log.go:172] (0xc0029328f0) (0xc000f27860) Stream added, broadcasting: 1
I0108 21:47:01.304383       9 log.go:172] (0xc0029328f0) Reply frame received for 1
I0108 21:47:01.304460       9 log.go:172] (0xc0029328f0) (0xc0012c7220) Create stream
I0108 21:47:01.304506       9 log.go:172] (0xc0029328f0) (0xc0012c7220) Stream added, broadcasting: 3
I0108 21:47:01.309541       9 log.go:172] (0xc0029328f0) Reply frame received for 3
I0108 21:47:01.309581       9 log.go:172] (0xc0029328f0) (0xc0012c72c0) Create stream
I0108 21:47:01.309606       9 log.go:172] (0xc0029328f0) (0xc0012c72c0) Stream added, broadcasting: 5
I0108 21:47:01.312321       9 log.go:172] (0xc0029328f0) Reply frame received for 5
I0108 21:47:02.398860       9 log.go:172] (0xc0029328f0) Data frame received for 3
I0108 21:47:02.398987       9 log.go:172] (0xc0012c7220) (3) Data frame handling
I0108 21:47:02.399027       9 log.go:172] (0xc0012c7220) (3) Data frame sent
I0108 21:47:02.538577       9 log.go:172] (0xc0029328f0) Data frame received for 1
I0108 21:47:02.538810       9 log.go:172] (0xc0029328f0) (0xc0012c72c0) Stream removed, broadcasting: 5
I0108 21:47:02.538919       9 log.go:172] (0xc000f27860) (1) Data frame handling
I0108 21:47:02.538972       9 log.go:172] (0xc000f27860) (1) Data frame sent
I0108 21:47:02.539284       9 log.go:172] (0xc0029328f0) (0xc0012c7220) Stream removed, broadcasting: 3
I0108 21:47:02.539412       9 log.go:172] (0xc0029328f0) (0xc000f27860) Stream removed, broadcasting: 1
I0108 21:47:02.539460       9 log.go:172] (0xc0029328f0) Go away received
I0108 21:47:02.539990       9 log.go:172] (0xc0029328f0) (0xc000f27860) Stream removed, broadcasting: 1
I0108 21:47:02.540016       9 log.go:172] (0xc0029328f0) (0xc0012c7220) Stream removed, broadcasting: 3
I0108 21:47:02.540029       9 log.go:172] (0xc0029328f0) (0xc0012c72c0) Stream removed, broadcasting: 5
Jan  8 21:47:02.540: INFO: Found all expected endpoints: [netserver-0]
Jan  8 21:47:02.595: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.5 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 21:47:02.595: INFO: >>> kubeConfig: /root/.kube/config
I0108 21:47:02.636112       9 log.go:172] (0xc002a962c0) (0xc00172db80) Create stream
I0108 21:47:02.636189       9 log.go:172] (0xc002a962c0) (0xc00172db80) Stream added, broadcasting: 1
I0108 21:47:02.639851       9 log.go:172] (0xc002a962c0) Reply frame received for 1
I0108 21:47:02.639937       9 log.go:172] (0xc002a962c0) (0xc0012c7540) Create stream
I0108 21:47:02.639954       9 log.go:172] (0xc002a962c0) (0xc0012c7540) Stream added, broadcasting: 3
I0108 21:47:02.641133       9 log.go:172] (0xc002a962c0) Reply frame received for 3
I0108 21:47:02.641162       9 log.go:172] (0xc002a962c0) (0xc00172dcc0) Create stream
I0108 21:47:02.641173       9 log.go:172] (0xc002a962c0) (0xc00172dcc0) Stream added, broadcasting: 5
I0108 21:47:02.642677       9 log.go:172] (0xc002a962c0) Reply frame received for 5
I0108 21:47:03.703301       9 log.go:172] (0xc002a962c0) Data frame received for 3
I0108 21:47:03.703361       9 log.go:172] (0xc0012c7540) (3) Data frame handling
I0108 21:47:03.703399       9 log.go:172] (0xc0012c7540) (3) Data frame sent
I0108 21:47:03.846520       9 log.go:172] (0xc002a962c0) Data frame received for 1
I0108 21:47:03.846677       9 log.go:172] (0xc00172db80) (1) Data frame handling
I0108 21:47:03.846775       9 log.go:172] (0xc00172db80) (1) Data frame sent
I0108 21:47:03.847094       9 log.go:172] (0xc002a962c0) (0xc00172dcc0) Stream removed, broadcasting: 5
I0108 21:47:03.847255       9 log.go:172] (0xc002a962c0) (0xc00172db80) Stream removed, broadcasting: 1
I0108 21:47:03.847479       9 log.go:172] (0xc002a962c0) (0xc0012c7540) Stream removed, broadcasting: 3
I0108 21:47:03.847617       9 log.go:172] (0xc002a962c0) Go away received
I0108 21:47:03.847787       9 log.go:172] (0xc002a962c0) (0xc00172db80) Stream removed, broadcasting: 1
I0108 21:47:03.847838       9 log.go:172] (0xc002a962c0) (0xc0012c7540) Stream removed, broadcasting: 3
I0108 21:47:03.847864       9 log.go:172] (0xc002a962c0) (0xc00172dcc0) Stream removed, broadcasting: 5
Jan  8 21:47:03.848: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:47:03.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8702" for this suite.

• [SLOW TEST:36.945 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1965,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:47:03.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan  8 21:47:03.965: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  8 21:47:03.982: INFO: Waiting for terminating namespaces to be deleted...
Jan  8 21:47:03.985: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan  8 21:47:04.000: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.001: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 21:47:04.001: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan  8 21:47:04.001: INFO: 	Container weave ready: true, restart count 1
Jan  8 21:47:04.001: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 21:47:04.001: INFO: test-container-pod from pod-network-test-8702 started at 2020-01-08 21:46:53 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.001: INFO: 	Container webserver ready: true, restart count 0
Jan  8 21:47:04.001: INFO: host-test-container-pod from pod-network-test-8702 started at 2020-01-08 21:46:53 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.001: INFO: 	Container agnhost ready: true, restart count 0
Jan  8 21:47:04.001: INFO: netserver-0 from pod-network-test-8702 started at 2020-01-08 21:46:27 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.001: INFO: 	Container webserver ready: true, restart count 0
Jan  8 21:47:04.001: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan  8 21:47:04.016: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container coredns ready: true, restart count 0
Jan  8 21:47:04.016: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container coredns ready: true, restart count 0
Jan  8 21:47:04.016: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jan  8 21:47:04.016: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 21:47:04.016: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan  8 21:47:04.016: INFO: 	Container weave ready: true, restart count 0
Jan  8 21:47:04.016: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 21:47:04.016: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container kube-scheduler ready: true, restart count 2
Jan  8 21:47:04.016: INFO: netserver-1 from pod-network-test-8702 started at 2020-01-08 21:46:27 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container webserver ready: true, restart count 0
Jan  8 21:47:04.016: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan  8 21:47:04.016: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 21:47:04.016: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e8079937f467da], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:47:05.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5194" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":122,"skipped":1996,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:47:05.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-394b493e-9c63-4c4b-955d-96f0fec89836
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-394b493e-9c63-4c4b-955d-96f0fec89836
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:47:19.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4894" for this suite.

• [SLOW TEST:14.399 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2010,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:47:19.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jan  8 21:47:19.625: INFO: Waiting up to 5m0s for pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17" in namespace "containers-1299" to be "success or failure"
Jan  8 21:47:19.629: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17": Phase="Pending", Reason="", readiness=false. Elapsed: 3.965676ms
Jan  8 21:47:21.704: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078924382s
Jan  8 21:47:23.784: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158854626s
Jan  8 21:47:25.807: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181676768s
Jan  8 21:47:27.878: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252180499s
Jan  8 21:47:29.884: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17": Phase="Pending", Reason="", readiness=false. Elapsed: 10.258880109s
Jan  8 21:47:31.892: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.266852568s
STEP: Saw pod success
Jan  8 21:47:31.892: INFO: Pod "client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17" satisfied condition "success or failure"
Jan  8 21:47:31.896: INFO: Trying to get logs from node jerma-node pod client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17 container test-container: 
STEP: delete the pod
Jan  8 21:47:31.952: INFO: Waiting for pod client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17 to disappear
Jan  8 21:47:31.963: INFO: Pod client-containers-118d97a0-02b1-43fa-a2da-029f653a2f17 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:47:31.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1299" for this suite.

• [SLOW TEST:12.490 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2040,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:47:31.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 21:47:32.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960" in namespace "downward-api-9471" to be "success or failure"
Jan  8 21:47:32.237: INFO: Pod "downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960": Phase="Pending", Reason="", readiness=false. Elapsed: 9.952154ms
Jan  8 21:47:34.247: INFO: Pod "downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020438435s
Jan  8 21:47:36.259: INFO: Pod "downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031697323s
Jan  8 21:47:38.269: INFO: Pod "downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042413716s
Jan  8 21:47:40.276: INFO: Pod "downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049007072s
STEP: Saw pod success
Jan  8 21:47:40.276: INFO: Pod "downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960" satisfied condition "success or failure"
Jan  8 21:47:40.278: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960 container client-container: 
STEP: delete the pod
Jan  8 21:47:40.317: INFO: Waiting for pod downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960 to disappear
Jan  8 21:47:40.419: INFO: Pod downwardapi-volume-5da3c96f-6476-4441-8d0c-4cbf9d45e960 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:47:40.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9471" for this suite.

• [SLOW TEST:8.461 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2049,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:47:40.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-17a9a9ab-4cef-403e-a5c0-c1546649bd53
STEP: Creating a pod to test consume secrets
Jan  8 21:47:40.841: INFO: Waiting up to 5m0s for pod "pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703" in namespace "secrets-4857" to be "success or failure"
Jan  8 21:47:40.854: INFO: Pod "pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703": Phase="Pending", Reason="", readiness=false. Elapsed: 12.39696ms
Jan  8 21:47:42.864: INFO: Pod "pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022582032s
Jan  8 21:47:44.874: INFO: Pod "pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032616866s
Jan  8 21:47:46.918: INFO: Pod "pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076520784s
Jan  8 21:47:48.924: INFO: Pod "pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08252502s
STEP: Saw pod success
Jan  8 21:47:48.924: INFO: Pod "pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703" satisfied condition "success or failure"
Jan  8 21:47:48.926: INFO: Trying to get logs from node jerma-node pod pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703 container secret-volume-test: 
STEP: delete the pod
Jan  8 21:47:49.068: INFO: Waiting for pod pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703 to disappear
Jan  8 21:47:49.080: INFO: Pod pod-secrets-f22999e7-a5e0-41cf-95dc-ac3d7a902703 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:47:49.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4857" for this suite.
STEP: Destroying namespace "secret-namespace-7002" for this suite.

• [SLOW TEST:8.673 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2052,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:47:49.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  8 21:48:03.318: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 21:48:03.333: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 21:48:05.334: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 21:48:05.344: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 21:48:07.333: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 21:48:07.366: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 21:48:09.334: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 21:48:09.340: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 21:48:11.334: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 21:48:11.338: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 21:48:13.334: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 21:48:13.343: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:48:13.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7575" for this suite.

• [SLOW TEST:24.250 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:48:13.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-a8405dad-a788-4a71-a6e5-d0d3ac4028eb
STEP: Creating secret with name secret-projected-all-test-volume-3e6b2664-4cd3-4ec2-9560-1925fcfb3632
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  8 21:48:13.492: INFO: Waiting up to 5m0s for pod "projected-volume-089c71be-0707-421d-a248-89663a517e3f" in namespace "projected-5366" to be "success or failure"
Jan  8 21:48:13.499: INFO: Pod "projected-volume-089c71be-0707-421d-a248-89663a517e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.61435ms
Jan  8 21:48:15.512: INFO: Pod "projected-volume-089c71be-0707-421d-a248-89663a517e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020163076s
Jan  8 21:48:17.520: INFO: Pod "projected-volume-089c71be-0707-421d-a248-89663a517e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028111824s
Jan  8 21:48:19.663: INFO: Pod "projected-volume-089c71be-0707-421d-a248-89663a517e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171239733s
Jan  8 21:48:21.671: INFO: Pod "projected-volume-089c71be-0707-421d-a248-89663a517e3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.179351734s
STEP: Saw pod success
Jan  8 21:48:21.671: INFO: Pod "projected-volume-089c71be-0707-421d-a248-89663a517e3f" satisfied condition "success or failure"
Jan  8 21:48:21.676: INFO: Trying to get logs from node jerma-node pod projected-volume-089c71be-0707-421d-a248-89663a517e3f container projected-all-volume-test: 
STEP: delete the pod
Jan  8 21:48:21.853: INFO: Waiting for pod projected-volume-089c71be-0707-421d-a248-89663a517e3f to disappear
Jan  8 21:48:21.863: INFO: Pod projected-volume-089c71be-0707-421d-a248-89663a517e3f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:48:21.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5366" for this suite.

• [SLOW TEST:8.518 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2117,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:48:21.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  8 21:48:22.018: INFO: Waiting up to 5m0s for pod "pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb" in namespace "emptydir-1773" to be "success or failure"
Jan  8 21:48:22.033: INFO: Pod "pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.511429ms
Jan  8 21:48:24.042: INFO: Pod "pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024269808s
Jan  8 21:48:26.049: INFO: Pod "pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030845907s
Jan  8 21:48:28.055: INFO: Pod "pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036868981s
Jan  8 21:48:30.065: INFO: Pod "pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046852681s
STEP: Saw pod success
Jan  8 21:48:30.065: INFO: Pod "pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb" satisfied condition "success or failure"
Jan  8 21:48:30.068: INFO: Trying to get logs from node jerma-node pod pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb container test-container: 
STEP: delete the pod
Jan  8 21:48:30.096: INFO: Waiting for pod pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb to disappear
Jan  8 21:48:30.107: INFO: Pod pod-0de1a8f3-c1f7-4514-a030-1134fcfbf9fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:48:30.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1773" for this suite.

• [SLOW TEST:8.236 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2123,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:48:30.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan  8 21:48:30.232: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:48:42.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-418" for this suite.

• [SLOW TEST:12.270 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":130,"skipped":2140,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:48:42.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:48:42.477: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:48:47.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6947" for this suite.

• [SLOW TEST:5.625 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":131,"skipped":2142,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:48:48.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod named pod-adoption carrying a 'name' label is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:48:55.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4184" for this suite.

• [SLOW TEST:7.256 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":132,"skipped":2157,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:48:55.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-dfe94c3b-9423-4570-8b63-3f457451f602
STEP: Creating a pod to test consume configMaps
Jan  8 21:48:55.457: INFO: Waiting up to 5m0s for pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00" in namespace "configmap-7256" to be "success or failure"
Jan  8 21:48:55.469: INFO: Pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00": Phase="Pending", Reason="", readiness=false. Elapsed: 11.420682ms
Jan  8 21:48:57.480: INFO: Pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022541244s
Jan  8 21:48:59.486: INFO: Pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029102304s
Jan  8 21:49:01.493: INFO: Pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035495842s
Jan  8 21:49:03.501: INFO: Pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043304869s
Jan  8 21:49:05.507: INFO: Pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049543772s
STEP: Saw pod success
Jan  8 21:49:05.507: INFO: Pod "pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00" satisfied condition "success or failure"
Jan  8 21:49:05.511: INFO: Trying to get logs from node jerma-node pod pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00 container configmap-volume-test: 
STEP: delete the pod
Jan  8 21:49:05.573: INFO: Waiting for pod pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00 to disappear
Jan  8 21:49:05.583: INFO: Pod pod-configmaps-79af47ef-6859-4371-864b-7b7ccbc1be00 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:49:05.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7256" for this suite.

• [SLOW TEST:10.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2173,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:49:05.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:49:12.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8824" for this suite.
STEP: Destroying namespace "nsdeletetest-9683" for this suite.
Jan  8 21:49:12.203: INFO: Namespace nsdeletetest-9683 was already deleted
STEP: Destroying namespace "nsdeletetest-6944" for this suite.

• [SLOW TEST:6.622 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":134,"skipped":2181,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:49:12.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  8 21:49:12.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7318 /api/v1/namespaces/watch-7318/configmaps/e2e-watch-test-resource-version 42fe468a-fdf2-4756-bfa0-8be4d2f411c4 893142 0 2020-01-08 21:49:12 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 21:49:12.381: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7318 /api/v1/namespaces/watch-7318/configmaps/e2e-watch-test-resource-version 42fe468a-fdf2-4756-bfa0-8be4d2f411c4 893144 0 2020-01-08 21:49:12 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:49:12.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7318" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":135,"skipped":2194,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:49:12.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan  8 21:49:12.523: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  8 21:49:12.540: INFO: Waiting for terminating namespaces to be deleted...
Jan  8 21:49:12.542: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan  8 21:49:12.607: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.607: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 21:49:12.607: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan  8 21:49:12.607: INFO: 	Container weave ready: true, restart count 1
Jan  8 21:49:12.607: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 21:49:12.607: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan  8 21:49:12.629: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.629: INFO: 	Container kube-scheduler ready: true, restart count 2
Jan  8 21:49:12.629: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.629: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan  8 21:49:12.629: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.629: INFO: 	Container etcd ready: true, restart count 1
Jan  8 21:49:12.629: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.629: INFO: 	Container coredns ready: true, restart count 0
Jan  8 21:49:12.629: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.629: INFO: 	Container coredns ready: true, restart count 0
Jan  8 21:49:12.629: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.629: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jan  8 21:49:12.629: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan  8 21:49:12.629: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 21:49:12.629: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan  8 21:49:12.629: INFO: 	Container weave ready: true, restart count 0
Jan  8 21:49:12.629: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-fff583c7-0ae7-46fb-a7b2-fec61a226526 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-fff583c7-0ae7-46fb-a7b2-fec61a226526 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-fff583c7-0ae7-46fb-a7b2-fec61a226526
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:54:29.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1989" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:316.685 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":136,"skipped":2208,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:54:29.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-d2f5c5cc-c473-4bf9-b769-27e440fa02cb
STEP: Creating a pod to test consume secrets
Jan  8 21:54:29.188: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8" in namespace "projected-9315" to be "success or failure"
Jan  8 21:54:29.197: INFO: Pod "pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953814ms
Jan  8 21:54:31.206: INFO: Pod "pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017921079s
Jan  8 21:54:33.221: INFO: Pod "pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032715969s
Jan  8 21:54:35.228: INFO: Pod "pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039858121s
Jan  8 21:54:37.248: INFO: Pod "pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059431791s
STEP: Saw pod success
Jan  8 21:54:37.248: INFO: Pod "pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8" satisfied condition "success or failure"
Jan  8 21:54:37.268: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8 container projected-secret-volume-test: 
STEP: delete the pod
Jan  8 21:54:37.370: INFO: Waiting for pod pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8 to disappear
Jan  8 21:54:37.376: INFO: Pod pod-projected-secrets-abbf0af2-3464-46e9-a6a8-12daf86564b8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:54:37.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9315" for this suite.

• [SLOW TEST:8.317 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2210,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:54:37.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 21:54:38.250: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 21:54:40.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:54:42.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:54:44.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117278, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 21:54:47.335: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:54:47.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5651-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:54:48.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1921" for this suite.
STEP: Destroying namespace "webhook-1921-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.054 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":138,"skipped":2237,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:54:48.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jan  8 21:54:56.619: INFO: Pod pod-hostip-2cd1ac04-069d-467c-abbb-bb5e85a7bbd4 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:54:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1158" for this suite.

• [SLOW TEST:8.177 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2238,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:54:56.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 21:54:57.334: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 21:54:59.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:55:01.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:55:03.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:55:05.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117297, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 21:55:08.395: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:55:08.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3127" for this suite.
STEP: Destroying namespace "webhook-3127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.967 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":140,"skipped":2245,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:55:08.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:55:08.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan  8 21:55:11.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3059 create -f -'
Jan  8 21:55:14.375: INFO: stderr: ""
Jan  8 21:55:14.375: INFO: stdout: "e2e-test-crd-publish-openapi-5369-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan  8 21:55:14.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3059 delete e2e-test-crd-publish-openapi-5369-crds test-cr'
Jan  8 21:55:14.639: INFO: stderr: ""
Jan  8 21:55:14.639: INFO: stdout: "e2e-test-crd-publish-openapi-5369-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan  8 21:55:14.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3059 apply -f -'
Jan  8 21:55:14.976: INFO: stderr: ""
Jan  8 21:55:14.976: INFO: stdout: "e2e-test-crd-publish-openapi-5369-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan  8 21:55:14.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3059 delete e2e-test-crd-publish-openapi-5369-crds test-cr'
Jan  8 21:55:15.120: INFO: stderr: ""
Jan  8 21:55:15.120: INFO: stdout: "e2e-test-crd-publish-openapi-5369-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan  8 21:55:15.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5369-crds'
Jan  8 21:55:15.445: INFO: stderr: ""
Jan  8 21:55:15.445: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5369-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:55:18.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3059" for this suite.

• [SLOW TEST:10.333 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":141,"skipped":2246,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:55:18.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0108 21:55:30.341814       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 21:55:30.341: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:55:30.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2550" for this suite.

• [SLOW TEST:11.433 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":142,"skipped":2252,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:55:30.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-9888c806-e5a6-4889-9a10-953f0b2757ad
STEP: Creating a pod to test consume secrets
Jan  8 21:55:35.166: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2" in namespace "projected-5338" to be "success or failure"
Jan  8 21:55:35.634: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2": Phase="Pending", Reason="", readiness=false. Elapsed: 467.766645ms
Jan  8 21:55:37.948: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.781831209s
Jan  8 21:55:40.857: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.690595339s
Jan  8 21:55:42.992: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.825879054s
Jan  8 21:55:45.077: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.910600405s
Jan  8 21:55:47.086: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.919948722s
Jan  8 21:55:49.149: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.983412622s
STEP: Saw pod success
Jan  8 21:55:49.150: INFO: Pod "pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2" satisfied condition "success or failure"
Jan  8 21:55:49.156: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2 container secret-volume-test: 
STEP: delete the pod
Jan  8 21:55:49.345: INFO: Waiting for pod pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2 to disappear
Jan  8 21:55:49.352: INFO: Pod pod-projected-secrets-954dea54-5ceb-4580-9066-a2fc673b58a2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:55:49.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5338" for this suite.

• [SLOW TEST:18.995 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2257,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:55:49.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 21:55:49.973: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 21:55:51.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117350, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:55:53.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117350, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:55:55.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117350, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117349, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 21:55:59.012: INFO: Waiting for the number of service e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:56:09.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2592" for this suite.
STEP: Destroying namespace "webhook-2592-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.047 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":144,"skipped":2281,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:56:09.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-a04331f7-4111-41dc-9a80-48727955f6cb
STEP: Creating a pod to test consume secrets
Jan  8 21:56:09.503: INFO: Waiting up to 5m0s for pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d" in namespace "secrets-1313" to be "success or failure"
Jan  8 21:56:09.512: INFO: Pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.466015ms
Jan  8 21:56:11.557: INFO: Pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053622575s
Jan  8 21:56:13.569: INFO: Pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064962643s
Jan  8 21:56:15.603: INFO: Pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099778728s
Jan  8 21:56:17.609: INFO: Pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105310091s
Jan  8 21:56:19.638: INFO: Pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134455044s
STEP: Saw pod success
Jan  8 21:56:19.638: INFO: Pod "pod-secrets-f0c529cd-705f-412e-81a7-04507024443d" satisfied condition "success or failure"
Jan  8 21:56:19.642: INFO: Trying to get logs from node jerma-node pod pod-secrets-f0c529cd-705f-412e-81a7-04507024443d container secret-volume-test: 
STEP: delete the pod
Jan  8 21:56:19.694: INFO: Waiting for pod pod-secrets-f0c529cd-705f-412e-81a7-04507024443d to disappear
Jan  8 21:56:19.697: INFO: Pod pod-secrets-f0c529cd-705f-412e-81a7-04507024443d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:56:19.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1313" for this suite.

• [SLOW TEST:10.294 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2282,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:56:19.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:56:19.841: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c0c102b8-aaa6-473b-b6cf-cb0416e2907c" in namespace "security-context-test-344" to be "success or failure"
Jan  8 21:56:20.467: INFO: Pod "busybox-privileged-false-c0c102b8-aaa6-473b-b6cf-cb0416e2907c": Phase="Pending", Reason="", readiness=false. Elapsed: 625.632982ms
Jan  8 21:56:22.479: INFO: Pod "busybox-privileged-false-c0c102b8-aaa6-473b-b6cf-cb0416e2907c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638196538s
Jan  8 21:56:24.490: INFO: Pod "busybox-privileged-false-c0c102b8-aaa6-473b-b6cf-cb0416e2907c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648858891s
Jan  8 21:56:26.502: INFO: Pod "busybox-privileged-false-c0c102b8-aaa6-473b-b6cf-cb0416e2907c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.661242729s
Jan  8 21:56:26.502: INFO: Pod "busybox-privileged-false-c0c102b8-aaa6-473b-b6cf-cb0416e2907c" satisfied condition "success or failure"
Jan  8 21:56:26.521: INFO: Got logs for pod "busybox-privileged-false-c0c102b8-aaa6-473b-b6cf-cb0416e2907c": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:56:26.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-344" for this suite.

• [SLOW TEST:6.842 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2285,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:56:26.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:56:26.707: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  8 21:56:26.737: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  8 21:56:31.742: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jan  8 21:56:33.757: INFO: Creating deployment "test-rolling-update-deployment"
Jan  8 21:56:33.765: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  8 21:56:33.796: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  8 21:56:35.816: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jan  8 21:56:35.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:56:37.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:56:39.853: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan  8 21:56:39.875: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-4246 /apis/apps/v1/namespaces/deployment-4246/deployments/test-rolling-update-deployment da045c7f-6d40-4f32-ada1-6070db0e2bf1 894740 1 2020-01-08 21:56:33 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053e94d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-08 21:56:33 +0000 UTC,LastTransitionTime:2020-01-08 21:56:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-08 21:56:38 +0000 UTC,LastTransitionTime:2020-01-08 21:56:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan  8 21:56:39.881: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-4246 /apis/apps/v1/namespaces/deployment-4246/replicasets/test-rolling-update-deployment-67cf4f6444 9b52ec7e-7ede-4d80-94a4-3d498de127f0 894729 1 2020-01-08 21:56:33 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment da045c7f-6d40-4f32-ada1-6070db0e2bf1 0xc005347767 0xc005347768}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053477d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan  8 21:56:39.881: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  8 21:56:39.881: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-4246 /apis/apps/v1/namespaces/deployment-4246/replicasets/test-rolling-update-controller 1bd952a2-29a7-4b8e-b537-de6acf52189e 894739 2 2020-01-08 21:56:26 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment da045c7f-6d40-4f32-ada1-6070db0e2bf1 0xc005347687 0xc005347688}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0053476f8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan  8 21:56:39.933: INFO: Pod "test-rolling-update-deployment-67cf4f6444-2z9ng" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-2z9ng test-rolling-update-deployment-67cf4f6444- deployment-4246 /api/v1/namespaces/deployment-4246/pods/test-rolling-update-deployment-67cf4f6444-2z9ng aaf45790-ccf8-4f72-85d6-6797815e4999 894728 0 2020-01-08 21:56:33 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 9b52ec7e-7ede-4d80-94a4-3d498de127f0 0xc005347d07 0xc005347d08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w8rcl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w8rcl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w8rcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:56:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:56:38 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 21:56:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-08 21:56:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 21:56:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://b30a7bff412d33ec9e51ae170281df9bab03b2aa6ab25169100d28fd26df1a73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:56:39.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4246" for this suite.

• [SLOW TEST:13.409 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":147,"skipped":2312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:56:39.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 21:56:49.239: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:56:49.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8534" for this suite.

• [SLOW TEST:9.379 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2339,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:56:49.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4855/configmap-test-99d8f2f0-ded8-4a1a-ba91-f5b6cef987d4
STEP: Creating a pod to test consume configMaps
Jan  8 21:56:49.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb" in namespace "configmap-4855" to be "success or failure"
Jan  8 21:56:49.577: INFO: Pod "pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 152.673304ms
Jan  8 21:56:51.888: INFO: Pod "pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463327556s
Jan  8 21:56:53.897: INFO: Pod "pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47227225s
Jan  8 21:56:55.903: INFO: Pod "pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478882682s
Jan  8 21:56:57.910: INFO: Pod "pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.486034237s
STEP: Saw pod success
Jan  8 21:56:57.910: INFO: Pod "pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb" satisfied condition "success or failure"
Jan  8 21:56:57.915: INFO: Trying to get logs from node jerma-node pod pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb container env-test: 
STEP: delete the pod
Jan  8 21:56:58.350: INFO: Waiting for pod pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb to disappear
Jan  8 21:56:58.371: INFO: Pod pod-configmaps-250108ed-950f-46e3-ac72-605a0af9bfcb no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:56:58.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4855" for this suite.

• [SLOW TEST:9.047 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2354,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:56:58.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jan  8 21:56:58.557: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7004" to be "success or failure"
Jan  8 21:56:58.573: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.744921ms
Jan  8 21:57:00.578: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020592462s
Jan  8 21:57:02.589: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032019195s
Jan  8 21:57:04.628: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070985114s
Jan  8 21:57:06.639: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081755595s
Jan  8 21:57:08.646: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088378541s
STEP: Saw pod success
Jan  8 21:57:08.646: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  8 21:57:08.650: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  8 21:57:08.733: INFO: Waiting for pod pod-host-path-test to disappear
Jan  8 21:57:08.741: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:57:08.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7004" for this suite.

• [SLOW TEST:10.365 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2363,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:57:08.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 21:57:08.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7" in namespace "downward-api-5903" to be "success or failure"
Jan  8 21:57:08.982: INFO: Pod "downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 23.879823ms
Jan  8 21:57:10.988: INFO: Pod "downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029704421s
Jan  8 21:57:12.995: INFO: Pod "downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036901523s
Jan  8 21:57:15.006: INFO: Pod "downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04761222s
STEP: Saw pod success
Jan  8 21:57:15.006: INFO: Pod "downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7" satisfied condition "success or failure"
Jan  8 21:57:15.010: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7 container client-container: 
STEP: delete the pod
Jan  8 21:57:15.100: INFO: Waiting for pod downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7 to disappear
Jan  8 21:57:15.151: INFO: Pod downwardapi-volume-68d9baf7-1e1f-494a-b5cd-59b1752a5fb7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:57:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5903" for this suite.

• [SLOW TEST:6.405 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2379,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:57:15.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 21:57:15.355: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5cebb454-c6cd-4686-be08-acbf82d1be20", Controller:(*bool)(0xc0052313a2), BlockOwnerDeletion:(*bool)(0xc0052313a3)}}
Jan  8 21:57:15.389: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b625d084-931a-40bf-98f3-30a4b6edc07f", Controller:(*bool)(0xc00531ff52), BlockOwnerDeletion:(*bool)(0xc00531ff53)}}
Jan  8 21:57:15.400: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"dabe2353-96a7-4cff-889e-1f60895c1538", Controller:(*bool)(0xc00526326a), BlockOwnerDeletion:(*bool)(0xc00526326b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:57:20.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7721" for this suite.

• [SLOW TEST:5.415 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":152,"skipped":2405,"failed":0}
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:57:20.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:57:20.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1341" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":153,"skipped":2405,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:57:20.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 21:57:21.626: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 21:57:23.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:57:26.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:57:28.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:57:29.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 21:57:31.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714117441, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 21:57:34.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:57:35.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5239" for this suite.
STEP: Destroying namespace "webhook-5239-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.328 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":154,"skipped":2415,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:57:35.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0108 21:58:05.425267       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 21:58:05.425: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:58:05.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5395" for this suite.

• [SLOW TEST:30.262 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":155,"skipped":2437,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:58:05.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:58:15.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7877" for this suite.

• [SLOW TEST:10.199 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2456,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:58:15.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0108 21:58:17.908612       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 21:58:17.908: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:58:17.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8184" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":157,"skipped":2483,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:58:17.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  8 21:58:18.760: INFO: Number of nodes with available pods: 0
Jan  8 21:58:18.760: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:19.778: INFO: Number of nodes with available pods: 0
Jan  8 21:58:19.778: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:21.275: INFO: Number of nodes with available pods: 0
Jan  8 21:58:21.275: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:21.923: INFO: Number of nodes with available pods: 0
Jan  8 21:58:21.923: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:22.981: INFO: Number of nodes with available pods: 0
Jan  8 21:58:22.981: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:23.836: INFO: Number of nodes with available pods: 0
Jan  8 21:58:23.836: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:25.128: INFO: Number of nodes with available pods: 0
Jan  8 21:58:25.128: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:26.304: INFO: Number of nodes with available pods: 0
Jan  8 21:58:26.304: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:26.790: INFO: Number of nodes with available pods: 0
Jan  8 21:58:26.790: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:27.829: INFO: Number of nodes with available pods: 0
Jan  8 21:58:27.830: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:28.776: INFO: Number of nodes with available pods: 0
Jan  8 21:58:28.776: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:29.776: INFO: Number of nodes with available pods: 2
Jan  8 21:58:29.776: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  8 21:58:29.816: INFO: Number of nodes with available pods: 1
Jan  8 21:58:29.817: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:30.831: INFO: Number of nodes with available pods: 1
Jan  8 21:58:30.831: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:31.839: INFO: Number of nodes with available pods: 1
Jan  8 21:58:31.839: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:32.832: INFO: Number of nodes with available pods: 1
Jan  8 21:58:32.832: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:33.861: INFO: Number of nodes with available pods: 1
Jan  8 21:58:33.861: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:34.838: INFO: Number of nodes with available pods: 1
Jan  8 21:58:34.838: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:35.841: INFO: Number of nodes with available pods: 1
Jan  8 21:58:35.841: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:36.832: INFO: Number of nodes with available pods: 1
Jan  8 21:58:36.832: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:37.841: INFO: Number of nodes with available pods: 1
Jan  8 21:58:37.841: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:38.833: INFO: Number of nodes with available pods: 1
Jan  8 21:58:38.833: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:39.833: INFO: Number of nodes with available pods: 1
Jan  8 21:58:39.834: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:40.832: INFO: Number of nodes with available pods: 1
Jan  8 21:58:40.832: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:41.832: INFO: Number of nodes with available pods: 1
Jan  8 21:58:41.832: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:42.837: INFO: Number of nodes with available pods: 1
Jan  8 21:58:42.837: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:43.832: INFO: Number of nodes with available pods: 1
Jan  8 21:58:43.832: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:44.839: INFO: Number of nodes with available pods: 1
Jan  8 21:58:44.839: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:45.952: INFO: Number of nodes with available pods: 1
Jan  8 21:58:45.952: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:46.831: INFO: Number of nodes with available pods: 1
Jan  8 21:58:46.831: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:47.835: INFO: Number of nodes with available pods: 1
Jan  8 21:58:47.835: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:48.867: INFO: Number of nodes with available pods: 1
Jan  8 21:58:48.868: INFO: Node jerma-node is running more than one daemon pod
Jan  8 21:58:49.835: INFO: Number of nodes with available pods: 2
Jan  8 21:58:49.835: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1055, will wait for the garbage collector to delete the pods
Jan  8 21:58:49.907: INFO: Deleting DaemonSet.extensions daemon-set took: 13.302531ms
Jan  8 21:58:50.207: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.901754ms
Jan  8 21:59:03.213: INFO: Number of nodes with available pods: 0
Jan  8 21:59:03.213: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 21:59:03.234: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1055/daemonsets","resourceVersion":"895489"},"items":null}

Jan  8 21:59:03.237: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1055/pods","resourceVersion":"895489"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 21:59:03.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1055" for this suite.

• [SLOW TEST:45.353 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":158,"skipped":2490,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 21:59:03.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2258
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-2258
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2258
Jan  8 21:59:03.452: INFO: Found 0 stateful pods, waiting for 1
Jan  8 21:59:13.463: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  8 21:59:13.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan  8 21:59:14.017: INFO: stderr: "I0108 21:59:13.736616    1730 log.go:172] (0xc000bbe6e0) (0xc000894280) Create stream\nI0108 21:59:13.736838    1730 log.go:172] (0xc000bbe6e0) (0xc000894280) Stream added, broadcasting: 1\nI0108 21:59:13.740325    1730 log.go:172] (0xc000bbe6e0) Reply frame received for 1\nI0108 21:59:13.740358    1730 log.go:172] (0xc000bbe6e0) (0xc000667b80) Create stream\nI0108 21:59:13.740369    1730 log.go:172] (0xc000bbe6e0) (0xc000667b80) Stream added, broadcasting: 3\nI0108 21:59:13.741600    1730 log.go:172] (0xc000bbe6e0) Reply frame received for 3\nI0108 21:59:13.741635    1730 log.go:172] (0xc000bbe6e0) (0xc000600780) Create stream\nI0108 21:59:13.741654    1730 log.go:172] (0xc000bbe6e0) (0xc000600780) Stream added, broadcasting: 5\nI0108 21:59:13.742803    1730 log.go:172] (0xc000bbe6e0) Reply frame received for 5\nI0108 21:59:13.836770    1730 log.go:172] (0xc000bbe6e0) Data frame received for 5\nI0108 21:59:13.836908    1730 log.go:172] (0xc000600780) (5) Data frame handling\nI0108 21:59:13.836939    1730 log.go:172] (0xc000600780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:59:13.876873    1730 log.go:172] (0xc000bbe6e0) Data frame received for 3\nI0108 21:59:13.877132    1730 log.go:172] (0xc000667b80) (3) Data frame handling\nI0108 21:59:13.877248    1730 log.go:172] (0xc000667b80) (3) Data frame sent\nI0108 21:59:13.990699    1730 log.go:172] (0xc000bbe6e0) (0xc000600780) Stream removed, broadcasting: 5\nI0108 21:59:13.991441    1730 log.go:172] (0xc000bbe6e0) Data frame received for 1\nI0108 21:59:13.991537    1730 log.go:172] (0xc000894280) (1) Data frame handling\nI0108 21:59:13.991609    1730 log.go:172] (0xc000894280) (1) Data frame sent\nI0108 21:59:13.991718    1730 log.go:172] (0xc000bbe6e0) (0xc000894280) Stream removed, broadcasting: 1\nI0108 21:59:13.992106    1730 log.go:172] (0xc000bbe6e0) (0xc000667b80) Stream removed, broadcasting: 3\nI0108 21:59:13.992245    1730 log.go:172] (0xc000bbe6e0) Go away received\nI0108 21:59:13.993522    1730 log.go:172] (0xc000bbe6e0) (0xc000894280) Stream removed, broadcasting: 1\nI0108 21:59:13.993556    1730 log.go:172] (0xc000bbe6e0) (0xc000667b80) Stream removed, broadcasting: 3\nI0108 21:59:13.993575    1730 log.go:172] (0xc000bbe6e0) (0xc000600780) Stream removed, broadcasting: 5\n"
Jan  8 21:59:14.017: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan  8 21:59:14.017: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan  8 21:59:14.027: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  8 21:59:24.032: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 21:59:24.032: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 21:59:24.842: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  8 21:59:24.842: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  }]
Jan  8 21:59:24.842: INFO: 
Jan  8 21:59:24.842: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  8 21:59:26.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.19863825s
Jan  8 21:59:27.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.957724225s
Jan  8 21:59:28.106: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.949804437s
Jan  8 21:59:29.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.934623329s
Jan  8 21:59:30.989: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.059511617s
Jan  8 21:59:31.995: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.052370683s
Jan  8 21:59:33.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.045887248s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2258
Jan  8 21:59:34.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan  8 21:59:34.534: INFO: stderr: "I0108 21:59:34.327516    1753 log.go:172] (0xc0006129a0) (0xc000922000) Create stream\nI0108 21:59:34.327838    1753 log.go:172] (0xc0006129a0) (0xc000922000) Stream added, broadcasting: 1\nI0108 21:59:34.330865    1753 log.go:172] (0xc0006129a0) Reply frame received for 1\nI0108 21:59:34.330905    1753 log.go:172] (0xc0006129a0) (0xc000633c20) Create stream\nI0108 21:59:34.330914    1753 log.go:172] (0xc0006129a0) (0xc000633c20) Stream added, broadcasting: 3\nI0108 21:59:34.332182    1753 log.go:172] (0xc0006129a0) Reply frame received for 3\nI0108 21:59:34.332211    1753 log.go:172] (0xc0006129a0) (0xc000633e00) Create stream\nI0108 21:59:34.332218    1753 log.go:172] (0xc0006129a0) (0xc000633e00) Stream added, broadcasting: 5\nI0108 21:59:34.333718    1753 log.go:172] (0xc0006129a0) Reply frame received for 5\nI0108 21:59:34.409384    1753 log.go:172] (0xc0006129a0) Data frame received for 3\nI0108 21:59:34.409529    1753 log.go:172] (0xc000633c20) (3) Data frame handling\nI0108 21:59:34.409563    1753 log.go:172] (0xc000633c20) (3) Data frame sent\nI0108 21:59:34.409630    1753 log.go:172] (0xc0006129a0) Data frame received for 5\nI0108 21:59:34.409654    1753 log.go:172] (0xc000633e00) (5) Data frame handling\nI0108 21:59:34.409691    1753 log.go:172] (0xc000633e00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:59:34.513282    1753 log.go:172] (0xc0006129a0) Data frame received for 1\nI0108 21:59:34.513378    1753 log.go:172] (0xc000922000) (1) Data frame handling\nI0108 21:59:34.513403    1753 log.go:172] (0xc000922000) (1) Data frame sent\nI0108 21:59:34.513465    1753 log.go:172] (0xc0006129a0) (0xc000633c20) Stream removed, broadcasting: 3\nI0108 21:59:34.513540    1753 log.go:172] (0xc0006129a0) (0xc000922000) Stream removed, broadcasting: 1\nI0108 21:59:34.515006    1753 log.go:172] (0xc0006129a0) (0xc000633e00) Stream removed, broadcasting: 5\nI0108 21:59:34.515086    1753 log.go:172] (0xc0006129a0) (0xc000922000) Stream removed, broadcasting: 1\nI0108 21:59:34.515097    1753 log.go:172] (0xc0006129a0) (0xc000633c20) Stream removed, broadcasting: 3\nI0108 21:59:34.515106    1753 log.go:172] (0xc0006129a0) (0xc000633e00) Stream removed, broadcasting: 5\nI0108 21:59:34.516554    1753 log.go:172] (0xc0006129a0) Go away received\n"
Jan  8 21:59:34.534: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan  8 21:59:34.534: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan  8 21:59:34.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan  8 21:59:35.008: INFO: stderr: "I0108 21:59:34.761431    1774 log.go:172] (0xc0000f4f20) (0xc0006719a0) Create stream\nI0108 21:59:34.761580    1774 log.go:172] (0xc0000f4f20) (0xc0006719a0) Stream added, broadcasting: 1\nI0108 21:59:34.763791    1774 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0108 21:59:34.763821    1774 log.go:172] (0xc0000f4f20) (0xc0008b8000) Create stream\nI0108 21:59:34.763829    1774 log.go:172] (0xc0000f4f20) (0xc0008b8000) Stream added, broadcasting: 3\nI0108 21:59:34.764675    1774 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0108 21:59:34.764690    1774 log.go:172] (0xc0000f4f20) (0xc000671b80) Create stream\nI0108 21:59:34.764696    1774 log.go:172] (0xc0000f4f20) (0xc000671b80) Stream added, broadcasting: 5\nI0108 21:59:34.765453    1774 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0108 21:59:34.861112    1774 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0108 21:59:34.861313    1774 log.go:172] (0xc000671b80) (5) Data frame handling\nI0108 21:59:34.861413    1774 log.go:172] (0xc000671b80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0108 21:59:34.864387    1774 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0108 21:59:34.864437    1774 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0108 21:59:34.864460    1774 log.go:172] (0xc0008b8000) (3) Data frame sent\nI0108 21:59:34.991410    1774 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0108 21:59:34.991647    1774 log.go:172] (0xc0000f4f20) (0xc000671b80) Stream removed, broadcasting: 5\nI0108 21:59:34.991727    1774 log.go:172] (0xc0006719a0) (1) Data frame handling\nI0108 21:59:34.991750    1774 log.go:172] (0xc0006719a0) (1) Data frame sent\nI0108 21:59:34.992046    1774 log.go:172] (0xc0000f4f20) (0xc0006719a0) Stream removed, broadcasting: 1\nI0108 21:59:34.993313    1774 log.go:172] (0xc0000f4f20) (0xc0008b8000) Stream removed, broadcasting: 3\nI0108 21:59:34.993367    1774 log.go:172] (0xc0000f4f20) Go away received\nI0108 21:59:34.993875    1774 log.go:172] (0xc0000f4f20) (0xc0006719a0) Stream removed, broadcasting: 1\nI0108 21:59:34.993946    1774 log.go:172] (0xc0000f4f20) (0xc0008b8000) Stream removed, broadcasting: 3\nI0108 21:59:34.993966    1774 log.go:172] (0xc0000f4f20) (0xc000671b80) Stream removed, broadcasting: 5\n"
Jan  8 21:59:35.008: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan  8 21:59:35.008: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan  8 21:59:35.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan  8 21:59:35.375: INFO: stderr: "I0108 21:59:35.209246    1789 log.go:172] (0xc0000f4bb0) (0xc000982000) Create stream\nI0108 21:59:35.209461    1789 log.go:172] (0xc0000f4bb0) (0xc000982000) Stream added, broadcasting: 1\nI0108 21:59:35.215705    1789 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0108 21:59:35.215755    1789 log.go:172] (0xc0000f4bb0) (0xc0009820a0) Create stream\nI0108 21:59:35.215762    1789 log.go:172] (0xc0000f4bb0) (0xc0009820a0) Stream added, broadcasting: 3\nI0108 21:59:35.218421    1789 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0108 21:59:35.218572    1789 log.go:172] (0xc0000f4bb0) (0xc0006ffb80) Create stream\nI0108 21:59:35.218593    1789 log.go:172] (0xc0000f4bb0) (0xc0006ffb80) Stream added, broadcasting: 5\nI0108 21:59:35.220427    1789 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0108 21:59:35.278628    1789 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0108 21:59:35.278664    1789 log.go:172] (0xc0006ffb80) (5) Data frame handling\nI0108 21:59:35.278678    1789 log.go:172] (0xc0006ffb80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0108 21:59:35.280021    1789 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0108 21:59:35.280072    1789 log.go:172] (0xc0009820a0) (3) Data frame handling\nI0108 21:59:35.280092    1789 log.go:172] (0xc0009820a0) (3) Data frame sent\nI0108 21:59:35.280145    1789 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0108 21:59:35.280159    1789 log.go:172] (0xc0006ffb80) (5) Data frame handling\nI0108 21:59:35.280167    1789 log.go:172] (0xc0006ffb80) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0108 21:59:35.281116    1789 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0108 21:59:35.281169    1789 log.go:172] (0xc0006ffb80) (5) Data frame handling\nI0108 21:59:35.281183    1789 log.go:172] (0xc0006ffb80) (5) Data frame sent\n+ true\nI0108 21:59:35.358784    1789 log.go:172] (0xc0000f4bb0) (0xc0006ffb80) Stream removed, broadcasting: 5\nI0108 21:59:35.358955    1789 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0108 21:59:35.359026    1789 log.go:172] (0xc0000f4bb0) (0xc0009820a0) Stream removed, broadcasting: 3\nI0108 21:59:35.359090    1789 log.go:172] (0xc000982000) (1) Data frame handling\nI0108 21:59:35.359122    1789 log.go:172] (0xc000982000) (1) Data frame sent\nI0108 21:59:35.359133    1789 log.go:172] (0xc0000f4bb0) (0xc000982000) Stream removed, broadcasting: 1\nI0108 21:59:35.359155    1789 log.go:172] (0xc0000f4bb0) Go away received\nI0108 21:59:35.360694    1789 log.go:172] (0xc0000f4bb0) (0xc000982000) Stream removed, broadcasting: 1\nI0108 21:59:35.360715    1789 log.go:172] (0xc0000f4bb0) (0xc0009820a0) Stream removed, broadcasting: 3\nI0108 21:59:35.360730    1789 log.go:172] (0xc0000f4bb0) (0xc0006ffb80) Stream removed, broadcasting: 5\n"
Jan  8 21:59:35.375: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan  8 21:59:35.375: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan  8 21:59:35.381: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 21:59:35.381: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 21:59:35.381: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  8 21:59:35.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan  8 21:59:35.686: INFO: stderr: "I0108 21:59:35.531615    1811 log.go:172] (0xc000923290) (0xc000882960) Create stream\nI0108 21:59:35.531968    1811 log.go:172] (0xc000923290) (0xc000882960) Stream added, broadcasting: 1\nI0108 21:59:35.541354    1811 log.go:172] (0xc000923290) Reply frame received for 1\nI0108 21:59:35.541440    1811 log.go:172] (0xc000923290) (0xc000686780) Create stream\nI0108 21:59:35.541459    1811 log.go:172] (0xc000923290) (0xc000686780) Stream added, broadcasting: 3\nI0108 21:59:35.542906    1811 log.go:172] (0xc000923290) Reply frame received for 3\nI0108 21:59:35.542950    1811 log.go:172] (0xc000923290) (0xc0004a9540) Create stream\nI0108 21:59:35.542960    1811 log.go:172] (0xc000923290) (0xc0004a9540) Stream added, broadcasting: 5\nI0108 21:59:35.544170    1811 log.go:172] (0xc000923290) Reply frame received for 5\nI0108 21:59:35.609558    1811 log.go:172] (0xc000923290) Data frame received for 5\nI0108 21:59:35.609606    1811 log.go:172] (0xc0004a9540) (5) Data frame handling\nI0108 21:59:35.609622    1811 log.go:172] (0xc0004a9540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:59:35.610482    1811 log.go:172] (0xc000923290) Data frame received for 3\nI0108 21:59:35.610497    1811 log.go:172] (0xc000686780) (3) Data frame handling\nI0108 21:59:35.610511    1811 log.go:172] (0xc000686780) (3) Data frame sent\nI0108 21:59:35.677175    1811 log.go:172] (0xc000923290) (0xc000686780) Stream removed, broadcasting: 3\nI0108 21:59:35.677402    1811 log.go:172] (0xc000923290) Data frame received for 1\nI0108 21:59:35.677453    1811 log.go:172] (0xc000923290) (0xc0004a9540) Stream removed, broadcasting: 5\nI0108 21:59:35.677526    1811 log.go:172] (0xc000882960) (1) Data frame handling\nI0108 21:59:35.677580    1811 log.go:172] (0xc000882960) (1) Data frame sent\nI0108 21:59:35.677599    1811 log.go:172] (0xc000923290) (0xc000882960) Stream removed, broadcasting: 1\nI0108 21:59:35.677672    1811 log.go:172] (0xc000923290) Go away received\nI0108 21:59:35.679154    1811 log.go:172] (0xc000923290) (0xc000882960) Stream removed, broadcasting: 1\nI0108 21:59:35.679171    1811 log.go:172] (0xc000923290) (0xc000686780) Stream removed, broadcasting: 3\nI0108 21:59:35.679177    1811 log.go:172] (0xc000923290) (0xc0004a9540) Stream removed, broadcasting: 5\n"
Jan  8 21:59:35.687: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan  8 21:59:35.687: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan  8 21:59:35.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan  8 21:59:36.114: INFO: stderr: "I0108 21:59:35.901400    1830 log.go:172] (0xc00096ec60) (0xc00094e280) Create stream\nI0108 21:59:35.902798    1830 log.go:172] (0xc00096ec60) (0xc00094e280) Stream added, broadcasting: 1\nI0108 21:59:35.922988    1830 log.go:172] (0xc00096ec60) Reply frame received for 1\nI0108 21:59:35.923113    1830 log.go:172] (0xc00096ec60) (0xc0009fa0a0) Create stream\nI0108 21:59:35.923134    1830 log.go:172] (0xc00096ec60) (0xc0009fa0a0) Stream added, broadcasting: 3\nI0108 21:59:35.924664    1830 log.go:172] (0xc00096ec60) Reply frame received for 3\nI0108 21:59:35.924701    1830 log.go:172] (0xc00096ec60) (0xc00094e320) Create stream\nI0108 21:59:35.924716    1830 log.go:172] (0xc00096ec60) (0xc00094e320) Stream added, broadcasting: 5\nI0108 21:59:35.927731    1830 log.go:172] (0xc00096ec60) Reply frame received for 5\nI0108 21:59:35.993897    1830 log.go:172] (0xc00096ec60) Data frame received for 5\nI0108 21:59:35.993981    1830 log.go:172] (0xc00094e320) (5) Data frame handling\nI0108 21:59:35.993998    1830 log.go:172] (0xc00094e320) (5) Data frame sent\nI0108 21:59:35.994007    1830 log.go:172] (0xc00096ec60) Data frame received for 5\nI0108 21:59:35.994014    1830 log.go:172] (0xc00094e320) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:59:35.994083    1830 log.go:172] (0xc00094e320) (5) Data frame sent\nI0108 21:59:36.012967    1830 log.go:172] (0xc00096ec60) Data frame received for 3\nI0108 21:59:36.012983    1830 log.go:172] (0xc0009fa0a0) (3) Data frame handling\nI0108 21:59:36.013002    1830 log.go:172] (0xc0009fa0a0) (3) Data frame sent\nI0108 21:59:36.099173    1830 log.go:172] (0xc00096ec60) Data frame received for 1\nI0108 21:59:36.099341    1830 log.go:172] (0xc00096ec60) (0xc00094e320) Stream removed, broadcasting: 5\nI0108 21:59:36.099400    1830 log.go:172] (0xc00094e280) (1) Data frame handling\nI0108 21:59:36.099453    1830 log.go:172] (0xc00094e280) (1) Data frame sent\nI0108 21:59:36.099547    1830 log.go:172] (0xc00096ec60) (0xc00094e280) Stream removed, broadcasting: 1\nI0108 21:59:36.099609    1830 log.go:172] (0xc00096ec60) (0xc0009fa0a0) Stream removed, broadcasting: 3\nI0108 21:59:36.099698    1830 log.go:172] (0xc00096ec60) Go away received\nI0108 21:59:36.103029    1830 log.go:172] (0xc00096ec60) (0xc00094e280) Stream removed, broadcasting: 1\nI0108 21:59:36.103076    1830 log.go:172] (0xc00096ec60) (0xc0009fa0a0) Stream removed, broadcasting: 3\nI0108 21:59:36.103102    1830 log.go:172] (0xc00096ec60) (0xc00094e320) Stream removed, broadcasting: 5\n"
Jan  8 21:59:36.114: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan  8 21:59:36.114: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan  8 21:59:36.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan  8 21:59:36.512: INFO: stderr: "I0108 21:59:36.296326    1849 log.go:172] (0xc0009e4fd0) (0xc00090c3c0) Create stream\nI0108 21:59:36.296496    1849 log.go:172] (0xc0009e4fd0) (0xc00090c3c0) Stream added, broadcasting: 1\nI0108 21:59:36.300043    1849 log.go:172] (0xc0009e4fd0) Reply frame received for 1\nI0108 21:59:36.300099    1849 log.go:172] (0xc0009e4fd0) (0xc000b56140) Create stream\nI0108 21:59:36.300118    1849 log.go:172] (0xc0009e4fd0) (0xc000b56140) Stream added, broadcasting: 3\nI0108 21:59:36.301213    1849 log.go:172] (0xc0009e4fd0) Reply frame received for 3\nI0108 21:59:36.301233    1849 log.go:172] (0xc0009e4fd0) (0xc00090c460) Create stream\nI0108 21:59:36.301244    1849 log.go:172] (0xc0009e4fd0) (0xc00090c460) Stream added, broadcasting: 5\nI0108 21:59:36.303082    1849 log.go:172] (0xc0009e4fd0) Reply frame received for 5\nI0108 21:59:36.353465    1849 log.go:172] (0xc0009e4fd0) Data frame received for 5\nI0108 21:59:36.353749    1849 log.go:172] (0xc00090c460) (5) Data frame handling\nI0108 21:59:36.353793    1849 log.go:172] (0xc00090c460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0108 21:59:36.379476    1849 log.go:172] (0xc0009e4fd0) Data frame received for 3\nI0108 21:59:36.379514    1849 log.go:172] (0xc000b56140) (3) Data frame handling\nI0108 21:59:36.379538    1849 log.go:172] (0xc000b56140) (3) Data frame sent\nI0108 21:59:36.487013    1849 log.go:172] (0xc0009e4fd0) Data frame received for 1\nI0108 21:59:36.487577    1849 log.go:172] (0xc0009e4fd0) (0xc000b56140) Stream removed, broadcasting: 3\nI0108 21:59:36.487773    1849 log.go:172] (0xc00090c3c0) (1) Data frame handling\nI0108 21:59:36.487840    1849 log.go:172] (0xc00090c3c0) (1) Data frame sent\nI0108 21:59:36.488185    1849 log.go:172] (0xc0009e4fd0) (0xc00090c460) Stream removed, broadcasting: 5\nI0108 21:59:36.488703    1849 log.go:172] (0xc0009e4fd0) (0xc00090c3c0) Stream removed, broadcasting: 1\nI0108 21:59:36.488762    1849 log.go:172] (0xc0009e4fd0) Go away received\nI0108 21:59:36.490896    1849 log.go:172] (0xc0009e4fd0) (0xc00090c3c0) Stream removed, broadcasting: 1\nI0108 21:59:36.490938    1849 log.go:172] (0xc0009e4fd0) (0xc000b56140) Stream removed, broadcasting: 3\nI0108 21:59:36.490954    1849 log.go:172] (0xc0009e4fd0) (0xc00090c460) Stream removed, broadcasting: 5\n"
Jan  8 21:59:36.512: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan  8 21:59:36.512: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan  8 21:59:36.512: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 21:59:36.548: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  8 21:59:46.562: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 21:59:46.562: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 21:59:46.562: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 21:59:46.578: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 21:59:46.578: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  }]
Jan  8 21:59:46.578: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:46.578: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:46.578: INFO: 
Jan  8 21:59:46.579: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 21:59:48.410: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 21:59:48.410: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  }]
Jan  8 21:59:48.411: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:48.411: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:48.411: INFO: 
Jan  8 21:59:48.411: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 21:59:49.440 and 21:59:50.492: INFO: (two further polls, output identical to the 21:59:48 poll above: ss-0, ss-1 and ss-2 still Running with 30s grace; StatefulSet ss has not reached scale 0, at 3)
Jan  8 21:59:51.499: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 21:59:51.499: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:03 +0000 UTC  }]
Jan  8 21:59:51.499: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:51.499: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:51.499: INFO: 
Jan  8 21:59:51.499: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 21:59:52.513: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 21:59:52.514: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:52.514: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 21:59:24 +0000 UTC  }]
Jan  8 21:59:52.514: INFO: 
Jan  8 21:59:52.514: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  8 21:59:53.522 through 21:59:56.561: INFO: (four further polls, output identical to the 21:59:52 poll above: ss-1 and ss-2 still Pending with 30s grace; StatefulSet ss has not reached scale 0, at 2)
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-2258
Jan  8 21:59:57.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan  8 21:59:57.747: INFO: rc: 1
Jan  8 21:59:57.747: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
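The framework keeps re-running the restore command on a fixed 10s cadence until it exits 0 or the overall timeout expires; the first failure above is the container terminating, and every later one is the pod already being deleted. A minimal sketch of that retry loop (retryHostCmd and run are illustrative names, not the framework's API):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryHostCmd re-runs run every 10s, mirroring the
    // "Waiting 10s to retry failed RunHostCmd" lines in this log.
    func retryHostCmd(run func() error, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = run(); err == nil {
    			return nil
    		}
    		time.Sleep(10 * time.Second)
    	}
    	return err
    }

    func main() {
    	// Stand-in command that always fails, like the NotFound errors below.
    	err := retryHostCmd(func() error {
    		return errors.New(`pods "ss-1" not found`)
    	}, 2)
    	fmt.Println("gave up:", err)
    }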
Jan  8 22:00:07.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan  8 22:00:07.984: INFO: rc: 1
Jan  8 22:00:07.984: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan  8 22:00:17.985 through 22:04:53.574: INFO: (the same RunHostCmd retry repeated every 10s, 28 further attempts; each returned rc: 1 with empty stdout and stderr: Error from server (NotFound): pods "ss-1" not found)
Jan  8 22:05:03.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2258 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan  8 22:05:03.761: INFO: rc: 1
Jan  8 22:05:03.761: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Jan  8 22:05:03.761: INFO: Scaling statefulset ss to 0
Jan  8 22:05:03.778: INFO: Waiting for statefulset status.replicas updated to 0
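Scaling to zero and then waiting on status.replicas, as logged above, can be reproduced against a recent client-go roughly as follows (a hedged sketch, not the e2e framework's own helper; scaleToZero is a hypothetical name):

    package e2esketch

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // scaleToZero sets spec.replicas to 0 and polls until the controller
    // reports status.replicas == 0, matching the two log lines above.
    func scaleToZero(cs *kubernetes.Clientset, ns, name string) error {
    	ctx := context.TODO()
    	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	zero := int32(0)
    	ss.Spec.Replicas = &zero
    	if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
    		return err
    	}
    	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		cur, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		return cur.Status.Replicas == 0, nil
    	})
    }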
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan  8 22:05:03.784: INFO: Deleting all statefulset in ns statefulset-2258
Jan  8 22:05:03.790: INFO: Scaling statefulset ss to 0
Jan  8 22:05:03.807: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 22:05:03.809: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:05:03.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2258" for this suite.

• [SLOW TEST:360.606 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":159,"skipped":2608,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:05:03.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan  8 22:05:03.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:05:21.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5996" for this suite.

• [SLOW TEST:17.502 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":160,"skipped":2612,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:05:21.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-5c4d2cfe-ce86-42c8-a782-e11869f089ab
STEP: Creating configMap with name cm-test-opt-upd-1e942e98-3fc2-4546-8591-38620b9dc758
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5c4d2cfe-ce86-42c8-a782-e11869f089ab
STEP: Updating configmap cm-test-opt-upd-1e942e98-3fc2-4546-8591-38620b9dc758
STEP: Creating configMap with name cm-test-opt-create-9291d1e6-2d14-4b4a-9cdc-d8f4a9c6cdde
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:05:34.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9519" for this suite.

• [SLOW TEST:13.159 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2625,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:05:34.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:05:34.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac" in namespace "downward-api-6766" to be "success or failure"
Jan  8 22:05:34.678: INFO: Pod "downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac": Phase="Pending", Reason="", readiness=false. Elapsed: 57.711329ms
Jan  8 22:05:36.687: INFO: Pod "downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066286631s
Jan  8 22:05:38.699: INFO: Pod "downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079123355s
Jan  8 22:05:40.705: INFO: Pod "downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08517227s
Jan  8 22:05:42.746: INFO: Pod "downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125687647s
STEP: Saw pod success
Jan  8 22:05:42.746: INFO: Pod "downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac" satisfied condition "success or failure"
Jan  8 22:05:42.762: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac container client-container: 
STEP: delete the pod
Jan  8 22:05:42.788: INFO: Waiting for pod downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac to disappear
Jan  8 22:05:42.826: INFO: Pod downwardapi-volume-402b3def-c305-49f9-9af4-3ddea9d318ac no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:05:42.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6766" for this suite.

• [SLOW TEST:8.284 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2629,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:05:42.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1938.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1938.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1938.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1938.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1938.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1938.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
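Both probe scripts perform the same two checks: getent hosts confirms the name resolves via /etc/hosts or DNS, and dig (UDP, then TCP) confirms the pod's generated A record answers; each success writes an OK marker under /results. For illustration, the same check expressed as a small Go program (net.LookupHost goes through the system resolver rather than dig, so this is an approximation, not the test's actual prober image):

    package main

    import (
    	"fmt"
    	"net"
    )

    // probe reports OK when the name resolves to at least one address,
    // mirroring the `test -n "$(getent hosts ...)" && echo OK` pattern.
    func probe(host string) string {
    	if addrs, err := net.LookupHost(host); err == nil && len(addrs) > 0 {
    		return "OK"
    	}
    	return "FAIL"
    }

    func main() {
    	fmt.Println(probe("dns-querier-1.dns-test-service.dns-1938.svc.cluster.local"))
    }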

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 22:05:55.038: INFO: DNS probes using dns-1938/dns-test-3d56e5c8-e5fa-4eef-bb7b-149a61533ae5 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:05:55.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1938" for this suite.

• [SLOW TEST:12.255 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":163,"skipped":2633,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:05:55.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:05.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5073" for this suite.

• [SLOW TEST:10.369 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":164,"skipped":2694,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:05.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  8 22:06:05.584: INFO: Waiting up to 5m0s for pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba" in namespace "emptydir-671" to be "success or failure"
Jan  8 22:06:05.608: INFO: Pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba": Phase="Pending", Reason="", readiness=false. Elapsed: 23.660009ms
Jan  8 22:06:07.614: INFO: Pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029730262s
Jan  8 22:06:09.625: INFO: Pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040302433s
Jan  8 22:06:11.630: INFO: Pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045918627s
Jan  8 22:06:13.648: INFO: Pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063374296s
Jan  8 22:06:15.658: INFO: Pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073677245s
STEP: Saw pod success
Jan  8 22:06:15.658: INFO: Pod "pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba" satisfied condition "success or failure"
Jan  8 22:06:15.663: INFO: Trying to get logs from node jerma-node pod pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba container test-container: 
STEP: delete the pod
Jan  8 22:06:15.724: INFO: Waiting for pod pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba to disappear
Jan  8 22:06:15.749: INFO: Pod pod-6aeffa56-4e13-4b18-8bae-7d72a0ee23ba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:15.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-671" for this suite.

• [SLOW TEST:10.299 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2702,"failed":0}
SSSSSSSSSSSSSSSSSSSS
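
For readers reproducing this spec by hand, a minimal sketch of the kind of pod it creates: an emptyDir volume backed by tmpfs (medium: Memory), written by a non-root user with 0644 file mode. Pod, container, and volume names here are illustrative, not the randomized ones the framework generates.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the (non-root,...) part of the spec name
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /mnt/test/file && chmod 0644 /mnt/test/file && ls -ln /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF
# after the pod reaches Succeeded, read its output:
# kubectl logs emptydir-tmpfs-demo
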
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:15.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:16.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7443" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2722,"failed":0}
SS
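
The spec behind this one is simple enough to reproduce directly: schedule a pod whose command always fails, then verify deletion still works while it crash-loops. A sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: always-fails-demo
spec:
  containers:
  - name: c
    image: busybox:1.29
    command: ["/bin/false"]      # exits non-zero immediately -> CrashLoopBackOff
EOF
# deletion must succeed even though the container never stays up:
kubectl delete pod always-fails-demo
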
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:16.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e8f8990b-eeec-4dad-a801-abecbdd87bd3
STEP: Creating a pod to test consume secrets
Jan  8 22:06:16.255: INFO: Waiting up to 5m0s for pod "pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474" in namespace "secrets-570" to be "success or failure"
Jan  8 22:06:16.275: INFO: Pod "pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474": Phase="Pending", Reason="", readiness=false. Elapsed: 20.234732ms
Jan  8 22:06:18.281: INFO: Pod "pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026473105s
Jan  8 22:06:20.290: INFO: Pod "pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034584738s
Jan  8 22:06:22.320: INFO: Pod "pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064630012s
STEP: Saw pod success
Jan  8 22:06:22.320: INFO: Pod "pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474" satisfied condition "success or failure"
Jan  8 22:06:22.323: INFO: Trying to get logs from node jerma-node pod pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474 container secret-volume-test: 
STEP: delete the pod
Jan  8 22:06:22.361: INFO: Waiting for pod pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474 to disappear
Jan  8 22:06:22.368: INFO: Pod pod-secrets-7c8835d3-7ca0-4a84-a398-d888d7a4c474 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:22.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-570" for this suite.

• [SLOW TEST:6.337 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2724,"failed":0}
SSSSSSSSSSSSSSSSSS
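
What "consumable in multiple volumes" means concretely: the same Secret mounted at two different paths in one container, both readable. A minimal sketch (secret, pod, and key names are illustrative):

kubectl create secret generic multi-vol-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-mount-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: multi-vol-secret
  - name: secret-volume-2
    secret:
      secretName: multi-vol-secret
EOF
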
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:22.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:06:22.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421" in namespace "projected-8270" to be "success or failure"
Jan  8 22:06:22.520: INFO: Pod "downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421": Phase="Pending", Reason="", readiness=false. Elapsed: 11.434111ms
Jan  8 22:06:24.527: INFO: Pod "downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018382027s
Jan  8 22:06:26.561: INFO: Pod "downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052462128s
Jan  8 22:06:28.573: INFO: Pod "downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064223874s
Jan  8 22:06:30.581: INFO: Pod "downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072464332s
STEP: Saw pod success
Jan  8 22:06:30.581: INFO: Pod "downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421" satisfied condition "success or failure"
Jan  8 22:06:30.588: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421 container client-container: 
STEP: delete the pod
Jan  8 22:06:30.773: INFO: Waiting for pod downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421 to disappear
Jan  8 22:06:30.790: INFO: Pod downwardapi-volume-6af0013c-6629-4c8a-b79e-35d2ada88421 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:30.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8270" for this suite.

• [SLOW TEST:8.423 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2742,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
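
The downward API volume here exposes the container's own memory limit as a file. A sketch of the projected form this spec exercises (names and the 64Mi limit are illustrative); with the default divisor of 1 the file holds the limit in bytes, so 64Mi reads back as 67108864:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
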
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:30.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan  8 22:06:30.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-471'
Jan  8 22:06:33.028: INFO: stderr: ""
Jan  8 22:06:33.028: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan  8 22:06:43.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-471 -o json'
Jan  8 22:06:43.230: INFO: stderr: ""
Jan  8 22:06:43.230: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-08T22:06:33Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-471\",\n        \"resourceVersion\": \"896975\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-471/pods/e2e-test-httpd-pod\",\n        \"uid\": \"85b892ac-6439-4faf-96a0-45a840b750c8\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-s56cs\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-s56cs\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-s56cs\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-08T22:06:33Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-08T22:06:39Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-08T22:06:39Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-08T22:06:33Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://74062d2513fe7acc338b2c78fa24dac73e074bc690530055378099e8a8f38b3d\",\n                
\"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-08T22:06:38Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-08T22:06:33Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  8 22:06:43.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-471'
Jan  8 22:06:43.772: INFO: stderr: ""
Jan  8 22:06:43.772: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Jan  8 22:06:43.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-471'
Jan  8 22:06:49.447: INFO: stderr: ""
Jan  8 22:06:49.447: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:49.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-471" for this suite.

• [SLOW TEST:18.684 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":169,"skipped":2782,"failed":0}
SSSSS
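
The replace flow above boils down to: dump the live pod as JSON, rewrite the image field, and feed the result back through kubectl replace. A hand-run equivalent (the sed rewrite is a blunt but serviceable stand-in for the framework's in-memory edit; only the fully prefixed image string in .spec matches):

kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine \
  --labels=run=e2e-test-httpd-pod
kubectl get pod e2e-test-httpd-pod -o json \
  | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
# -> docker.io/library/busybox:1.29
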
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:49.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jan  8 22:06:49.559: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6551" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":170,"skipped":2787,"failed":0}
SSS
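
With -p 0 the proxy binds an ephemeral port and prints the chosen address on stdout, which is what the curl step then hits. A hand-run sketch (the sed expression just pulls the port out of "Starting to serve on 127.0.0.1:NNNNN"):

kubectl proxy -p 0 > /tmp/proxy.out &
PROXY_PID=$!
sleep 1
PORT=$(sed -n 's/.*:\([0-9][0-9]*\)$/\1/p' /tmp/proxy.out | head -1)
curl -s "http://127.0.0.1:${PORT}/api/" | head -n 5
kill $PROXY_PID
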
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:49.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-b26be0a7-6a97-4f10-a19b-4f35bb2e3e0e
STEP: Creating a pod to test consume secrets
Jan  8 22:06:49.847: INFO: Waiting up to 5m0s for pod "pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651" in namespace "secrets-6666" to be "success or failure"
Jan  8 22:06:49.854: INFO: Pod "pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651": Phase="Pending", Reason="", readiness=false. Elapsed: 6.670497ms
Jan  8 22:06:51.872: INFO: Pod "pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025215111s
Jan  8 22:06:53.885: INFO: Pod "pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037665141s
Jan  8 22:06:55.890: INFO: Pod "pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043502616s
Jan  8 22:06:57.897: INFO: Pod "pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050248079s
STEP: Saw pod success
Jan  8 22:06:57.897: INFO: Pod "pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651" satisfied condition "success or failure"
Jan  8 22:06:57.901: INFO: Trying to get logs from node jerma-node pod pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651 container secret-volume-test: 
STEP: delete the pod
Jan  8 22:06:57.964: INFO: Waiting for pod pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651 to disappear
Jan  8 22:06:57.972: INFO: Pod pod-secrets-214609b6-110e-4092-ab3f-af0150b0e651 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:06:57.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6666" for this suite.

• [SLOW TEST:8.318 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2790,"failed":0}
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:06:57.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:07:02.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6211" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":172,"skipped":2790,"failed":0}
SSSSSSSSS
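
"Same order" here means every watcher, wherever in the event stream it starts, observes the subsequent events in an identical sequence. A rough shell-level analogue using the watch API through the proxy (resourceVersion=0 is illustrative; the spec itself starts a watch from each produced version):

kubectl proxy &
PROXY_PID=$!
sleep 1
URL="http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0"
curl -sN "$URL" > /tmp/watch-a.jsonl &
A=$!
curl -sN "$URL" > /tmp/watch-b.jsonl &
B=$!
kubectl create configmap watch-order-demo --from-literal=k=v
kubectl label configmap watch-order-demo step=one --overwrite
kubectl delete configmap watch-order-demo
sleep 2
kill $PROXY_PID $A $B
diff /tmp/watch-a.jsonl /tmp/watch-b.jsonl && echo "watchers agree on order"
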
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:07:02.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:07:03.156: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 22:07:05.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:07:07.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:07:09.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118023, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:07:12.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:07:12.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8899" for this suite.
STEP: Destroying namespace "webhook-8899-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.984 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":173,"skipped":2799,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
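
The patch/update steps above map onto ordinary JSON patches against the webhook's rules. A sketch with an illustrative configuration name (the suite's is generated):

# drop CREATE from the first rule: the non-compliant configMap is now admitted
kubectl patch validatingwebhookconfiguration e2e-test-webhook-config \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# patch CREATE back in: the same create is rejected again
kubectl patch validatingwebhookconfiguration e2e-test-webhook-config \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
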
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:07:12.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0108 22:07:24.993817       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 22:07:24.993: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:07:24.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7184" for this suite.

• [SLOW TEST:12.491 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":174,"skipped":2821,"failed":0}
SSSSSSSSSSSSSSS
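
"Not orphaning" means the RC is deleted with a cascading propagation policy, so the garbage collector reaps its pods through their ownerReferences. A sketch (names illustrative; on a v1.17-era client, --cascade=false would orphan the pods instead):

kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
EOF
kubectl delete rc gc-demo-rc      # cascading delete; pods are garbage collected
kubectl get pods -l app=gc-demo   # eventually returns nothing
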
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:07:25.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-1b459019-dc64-48de-8bc8-312fb2ba9965
STEP: Creating a pod to test consume configMaps
Jan  8 22:07:25.100: INFO: Waiting up to 5m0s for pod "pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819" in namespace "configmap-83" to be "success or failure"
Jan  8 22:07:25.104: INFO: Pod "pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819": Phase="Pending", Reason="", readiness=false. Elapsed: 3.955166ms
Jan  8 22:07:27.110: INFO: Pod "pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010498s
Jan  8 22:07:29.116: INFO: Pod "pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016503555s
Jan  8 22:07:31.140: INFO: Pod "pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04053994s
STEP: Saw pod success
Jan  8 22:07:31.140: INFO: Pod "pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819" satisfied condition "success or failure"
Jan  8 22:07:31.144: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819 container configmap-volume-test: 
STEP: delete the pod
Jan  8 22:07:31.170: INFO: Waiting for pod pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819 to disappear
Jan  8 22:07:31.177: INFO: Pod pod-configmaps-d41a6ec2-6aee-4739-9606-c143e9ee8819 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:07:31.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-83" for this suite.

• [SLOW TEST:6.184 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2836,"failed":0}
SSSSSSSSSS
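
This mirrors the Secrets-in-volume sketches above with a configMap source; the optional items list shows how a single key can be projected to a chosen path (all names illustrative):

kubectl create configmap cm-volume-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-volume-demo
      items:
      - key: data-1
        path: path/to/data-1
EOF
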
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:07:31.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-7f4dac34-66f5-4a4b-a26c-4405411dca76 in namespace container-probe-1575
Jan  8 22:07:39.303: INFO: Started pod busybox-7f4dac34-66f5-4a4b-a26c-4405411dca76 in namespace container-probe-1575
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 22:07:39.309: INFO: Initial restart count of pod busybox-7f4dac34-66f5-4a4b-a26c-4405411dca76 is 0
Jan  8 22:08:29.555: INFO: Restart count of pod container-probe-1575/busybox-7f4dac34-66f5-4a4b-a26c-4405411dca76 is now 1 (50.245632045s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:08:29.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1575" for this suite.

• [SLOW TEST:58.442 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2846,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
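
The restart observed at 22:08:29 is the liveness machinery working as intended: the probe execs "cat /tmp/health", the file disappears after 30 seconds, and the kubelet restarts the container. The classic reproduction (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -w    # watch restartCount go 0 -> 1
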
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:08:29.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-638ba243-1212-41f4-8804-51ce5854f87d
STEP: Creating a pod to test consume configMaps
Jan  8 22:08:29.951: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27" in namespace "projected-3564" to be "success or failure"
Jan  8 22:08:29.961: INFO: Pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27": Phase="Pending", Reason="", readiness=false. Elapsed: 9.893664ms
Jan  8 22:08:31.967: INFO: Pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015388664s
Jan  8 22:08:33.981: INFO: Pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029251806s
Jan  8 22:08:35.992: INFO: Pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041014866s
Jan  8 22:08:37.998: INFO: Pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047045359s
Jan  8 22:08:40.005: INFO: Pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054229395s
STEP: Saw pod success
Jan  8 22:08:40.006: INFO: Pod "pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27" satisfied condition "success or failure"
Jan  8 22:08:40.009: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 22:08:40.040: INFO: Waiting for pod pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27 to disappear
Jan  8 22:08:40.082: INFO: Pod pod-projected-configmaps-754ec8c5-42fb-4ef4-80d0-2950c0525e27 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:08:40.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3564" for this suite.

• [SLOW TEST:10.467 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2869,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
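
"As non-root" combines a projected configMap volume with a pod-level runAsUser; the container should still be able to read the projected file. A sketch (names and UID illustrative):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "id -u; cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
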
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:08:40.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1044 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1044;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1044 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1044;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1044.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1044.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1044.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1044.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1044.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1044.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1044.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 157.185.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.185.157_udp@PTR;check="$$(dig +tcp +noall +answer +search 157.185.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.185.157_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1044 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1044;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1044 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1044;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1044.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1044.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1044.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1044.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1044.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1044.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1044.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1044.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1044.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 157.185.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.185.157_udp@PTR;check="$$(dig +tcp +noall +answer +search 157.185.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.185.157_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 22:08:50.328: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.334: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.344: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.350: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.359: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.363: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.396: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.399: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.403: INFO: Unable to read jessie_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.406: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.409: INFO: Unable to read jessie_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.417: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:50.446: INFO: Lookups using dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1044 wheezy_tcp@dns-test-service.dns-1044 wheezy_udp@dns-test-service.dns-1044.svc wheezy_tcp@dns-test-service.dns-1044.svc wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1044 jessie_tcp@dns-test-service.dns-1044 jessie_udp@dns-test-service.dns-1044.svc jessie_tcp@dns-test-service.dns-1044.svc jessie_udp@_http._tcp.dns-test-service.dns-1044.svc jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc]

Jan  8 22:08:55.466: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.477: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.494: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.498: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.502: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.507: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.517: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.591: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.600: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.614: INFO: Unable to read jessie_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.636: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.649: INFO: Unable to read jessie_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.681: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.688: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:08:55.722: INFO: Lookups using dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1044 wheezy_tcp@dns-test-service.dns-1044 wheezy_udp@dns-test-service.dns-1044.svc wheezy_tcp@dns-test-service.dns-1044.svc wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1044 jessie_tcp@dns-test-service.dns-1044 jessie_udp@dns-test-service.dns-1044.svc jessie_tcp@dns-test-service.dns-1044.svc jessie_udp@_http._tcp.dns-test-service.dns-1044.svc jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc]

Jan  8 22:09:00.465: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.480: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.489: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.500: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.507: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.515: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.527: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.563: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.568: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.576: INFO: Unable to read jessie_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.582: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.587: INFO: Unable to read jessie_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.594: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.598: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:00.623: INFO: Lookups using dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1044 wheezy_tcp@dns-test-service.dns-1044 wheezy_udp@dns-test-service.dns-1044.svc wheezy_tcp@dns-test-service.dns-1044.svc wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1044 jessie_tcp@dns-test-service.dns-1044 jessie_udp@dns-test-service.dns-1044.svc jessie_tcp@dns-test-service.dns-1044.svc jessie_udp@_http._tcp.dns-test-service.dns-1044.svc jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc]

Jan  8 22:09:05.455: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.460: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.465: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.475: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.479: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.483: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.488: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.521: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.525: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.530: INFO: Unable to read jessie_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.534: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.537: INFO: Unable to read jessie_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.541: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.544: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.548: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:05.570: INFO: Lookups using dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1044 wheezy_tcp@dns-test-service.dns-1044 wheezy_udp@dns-test-service.dns-1044.svc wheezy_tcp@dns-test-service.dns-1044.svc wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1044 jessie_tcp@dns-test-service.dns-1044 jessie_udp@dns-test-service.dns-1044.svc jessie_tcp@dns-test-service.dns-1044.svc jessie_udp@_http._tcp.dns-test-service.dns-1044.svc jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc]

Jan  8 22:09:10.458: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.466: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.472: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.478: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.484: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.490: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.495: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.500: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.533: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.538: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.542: INFO: Unable to read jessie_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.545: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.549: INFO: Unable to read jessie_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.552: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.556: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.563: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:10.586: INFO: Lookups using dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1044 wheezy_tcp@dns-test-service.dns-1044 wheezy_udp@dns-test-service.dns-1044.svc wheezy_tcp@dns-test-service.dns-1044.svc wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1044 jessie_tcp@dns-test-service.dns-1044 jessie_udp@dns-test-service.dns-1044.svc jessie_tcp@dns-test-service.dns-1044.svc jessie_udp@_http._tcp.dns-test-service.dns-1044.svc jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc]

Jan  8 22:09:15.454: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.458: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.465: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.468: INFO: Unable to read wheezy_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.470: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.473: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.476: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.507: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.510: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.513: INFO: Unable to read jessie_udp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.516: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044 from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.519: INFO: Unable to read jessie_udp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.523: INFO: Unable to read jessie_tcp@dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.525: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.528: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc from pod dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8: the server could not find the requested resource (get pods dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8)
Jan  8 22:09:15.543: INFO: Lookups using dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1044 wheezy_tcp@dns-test-service.dns-1044 wheezy_udp@dns-test-service.dns-1044.svc wheezy_tcp@dns-test-service.dns-1044.svc wheezy_udp@_http._tcp.dns-test-service.dns-1044.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1044.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1044 jessie_tcp@dns-test-service.dns-1044 jessie_udp@dns-test-service.dns-1044.svc jessie_tcp@dns-test-service.dns-1044.svc jessie_udp@_http._tcp.dns-test-service.dns-1044.svc jessie_tcp@_http._tcp.dns-test-service.dns-1044.svc]

Jan  8 22:09:20.624: INFO: DNS probes using dns-1044/dns-test-e1d1ca94-aefc-452d-a7ef-f747a58507a8 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:09:21.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1044" for this suite.

• [SLOW TEST:41.026 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":178,"skipped":2915,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:09:21.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan  8 22:09:29.802: INFO: Successfully updated pod "labelsupdate33ddc69c-d29f-4357-a046-22b649888ea0"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:09:31.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8377" for this suite.

• [SLOW TEST:10.785 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2923,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:09:31.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:09:40.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4623" for this suite.

• [SLOW TEST:8.198 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2927,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:09:40.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-8047
STEP: Creating an active service to test reachability when its FQDN is referred to as the externalName of another service
STEP: creating service externalsvc in namespace services-8047
STEP: creating replication controller externalsvc in namespace services-8047
I0108 22:09:40.424904       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8047, replica count: 2
I0108 22:09:43.476227       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:09:46.477214       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:09:49.477720       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan  8 22:09:49.540: INFO: Creating new exec pod
Jan  8 22:09:55.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8047 execpodmck7v -- /bin/sh -x -c nslookup nodeport-service'
Jan  8 22:09:56.076: INFO: stderr: "I0108 22:09:55.874852    2571 log.go:172] (0xc000b760b0) (0xc000709cc0) Create stream\nI0108 22:09:55.875178    2571 log.go:172] (0xc000b760b0) (0xc000709cc0) Stream added, broadcasting: 1\nI0108 22:09:55.880681    2571 log.go:172] (0xc000b760b0) Reply frame received for 1\nI0108 22:09:55.880786    2571 log.go:172] (0xc000b760b0) (0xc0004ff400) Create stream\nI0108 22:09:55.880826    2571 log.go:172] (0xc000b760b0) (0xc0004ff400) Stream added, broadcasting: 3\nI0108 22:09:55.883112    2571 log.go:172] (0xc000b760b0) Reply frame received for 3\nI0108 22:09:55.883167    2571 log.go:172] (0xc000b760b0) (0xc000709d60) Create stream\nI0108 22:09:55.883179    2571 log.go:172] (0xc000b760b0) (0xc000709d60) Stream added, broadcasting: 5\nI0108 22:09:55.885870    2571 log.go:172] (0xc000b760b0) Reply frame received for 5\nI0108 22:09:55.975500    2571 log.go:172] (0xc000b760b0) Data frame received for 5\nI0108 22:09:55.975624    2571 log.go:172] (0xc000709d60) (5) Data frame handling\nI0108 22:09:55.975685    2571 log.go:172] (0xc000709d60) (5) Data frame sent\n+ nslookup nodeport-service\nI0108 22:09:55.986359    2571 log.go:172] (0xc000b760b0) Data frame received for 3\nI0108 22:09:55.986400    2571 log.go:172] (0xc0004ff400) (3) Data frame handling\nI0108 22:09:55.986424    2571 log.go:172] (0xc0004ff400) (3) Data frame sent\nI0108 22:09:55.988383    2571 log.go:172] (0xc000b760b0) Data frame received for 3\nI0108 22:09:55.988395    2571 log.go:172] (0xc0004ff400) (3) Data frame handling\nI0108 22:09:55.988403    2571 log.go:172] (0xc0004ff400) (3) Data frame sent\nI0108 22:09:56.062952    2571 log.go:172] (0xc000b760b0) (0xc0004ff400) Stream removed, broadcasting: 3\nI0108 22:09:56.063071    2571 log.go:172] (0xc000b760b0) Data frame received for 1\nI0108 22:09:56.063093    2571 log.go:172] (0xc000709cc0) (1) Data frame handling\nI0108 22:09:56.063111    2571 log.go:172] (0xc000709cc0) (1) Data frame sent\nI0108 22:09:56.063138    2571 log.go:172] (0xc000b760b0) (0xc000709cc0) Stream removed, broadcasting: 1\nI0108 22:09:56.064336    2571 log.go:172] (0xc000b760b0) (0xc000709d60) Stream removed, broadcasting: 5\nI0108 22:09:56.064373    2571 log.go:172] (0xc000b760b0) Go away received\nI0108 22:09:56.064886    2571 log.go:172] (0xc000b760b0) (0xc000709cc0) Stream removed, broadcasting: 1\nI0108 22:09:56.064915    2571 log.go:172] (0xc000b760b0) (0xc0004ff400) Stream removed, broadcasting: 3\nI0108 22:09:56.064936    2571 log.go:172] (0xc000b760b0) (0xc000709d60) Stream removed, broadcasting: 5\n"
Jan  8 22:09:56.076: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8047.svc.cluster.local\tcanonical name = externalsvc.services-8047.svc.cluster.local.\nName:\texternalsvc.services-8047.svc.cluster.local\nAddress: 10.96.168.52\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-8047, will wait for the garbage collector to delete the pods
Jan  8 22:09:56.142: INFO: Deleting ReplicationController externalsvc took: 9.725427ms
Jan  8 22:09:56.543: INFO: Terminating ReplicationController externalsvc pods took: 400.753014ms
Jan  8 22:10:13.278: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:10:13.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8047" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:33.224 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":181,"skipped":2955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:10:13.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-6bd35368-2129-4327-842f-ff1d96b03b58 in namespace container-probe-8407
Jan  8 22:10:21.508: INFO: Started pod busybox-6bd35368-2129-4327-842f-ff1d96b03b58 in namespace container-probe-8407
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 22:10:21.513: INFO: Initial restart count of pod busybox-6bd35368-2129-4327-842f-ff1d96b03b58 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:14:22.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8407" for this suite.

• [SLOW TEST:249.474 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2993,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:14:22.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:14:23.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112" in namespace "projected-4287" to be "success or failure"
Jan  8 22:14:23.101: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112": Phase="Pending", Reason="", readiness=false. Elapsed: 29.627504ms
Jan  8 22:14:25.104: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033340308s
Jan  8 22:14:27.127: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055840784s
Jan  8 22:14:29.163: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09212153s
Jan  8 22:14:31.170: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09897866s
Jan  8 22:14:33.180: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112": Phase="Pending", Reason="", readiness=false. Elapsed: 10.108727727s
Jan  8 22:14:35.185: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.114333696s
STEP: Saw pod success
Jan  8 22:14:35.185: INFO: Pod "downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112" satisfied condition "success or failure"
Jan  8 22:14:35.190: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112 container client-container: 
STEP: delete the pod
Jan  8 22:14:35.244: INFO: Waiting for pod downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112 to disappear
Jan  8 22:14:35.258: INFO: Pod downwardapi-volume-3ea40ae1-05d9-409b-9b5c-ca2407e82112 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:14:35.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4287" for this suite.

• [SLOW TEST:12.453 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3009,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:14:35.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:14:43.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6920" for this suite.

• [SLOW TEST:8.209 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3014,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:14:43.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:14:43.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2940" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":185,"skipped":3019,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:14:43.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:15:00.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7394" for this suite.

• [SLOW TEST:17.001 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":186,"skipped":3020,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:15:00.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:15:00.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan  8 22:15:03.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1787 create -f -'
Jan  8 22:15:06.397: INFO: stderr: ""
Jan  8 22:15:06.397: INFO: stdout: "e2e-test-crd-publish-openapi-2039-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan  8 22:15:06.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1787 delete e2e-test-crd-publish-openapi-2039-crds test-cr'
Jan  8 22:15:06.550: INFO: stderr: ""
Jan  8 22:15:06.550: INFO: stdout: "e2e-test-crd-publish-openapi-2039-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan  8 22:15:06.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1787 apply -f -'
Jan  8 22:15:06.969: INFO: stderr: ""
Jan  8 22:15:06.969: INFO: stdout: "e2e-test-crd-publish-openapi-2039-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan  8 22:15:06.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1787 delete e2e-test-crd-publish-openapi-2039-crds test-cr'
Jan  8 22:15:07.112: INFO: stderr: ""
Jan  8 22:15:07.112: INFO: stdout: "e2e-test-crd-publish-openapi-2039-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan  8 22:15:07.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2039-crds'
Jan  8 22:15:07.564: INFO: stderr: ""
Jan  8 22:15:07.564: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2039-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:15:10.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1787" for this suite.

• [SLOW TEST:9.920 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":187,"skipped":3034,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:15:10.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6539.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
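
Editor's note: the names probed by the commands above exist because the test pod sets hostname and subdomain and a matching headless service is present. A minimal reproduction with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2      # must equal the pod's subdomain
spec:
  clusterIP: None               # headless: DNS returns the pod IPs
  selector: {name: dns-querier-2}
  ports: [{port: 80, name: http}]
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels: {name: dns-querier-2}
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "600"]
EOF
# dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local gets an A record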

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 22:15:20.752: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.756: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.759: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.762: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.783: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.787: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.790: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.793: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:20.799: INFO: Lookups using dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local]

Jan  8 22:15:25.807: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.824: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.834: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.840: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.864: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.870: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.878: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.885: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:25.897: INFO: Lookups using dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local]

Jan  8 22:15:30.808: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.813: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.817: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.820: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.830: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.833: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.836: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.839: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:30.845: INFO: Lookups using dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local]

Jan  8 22:15:35.837: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.862: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.868: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.873: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.889: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.892: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.895: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.899: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:35.905: INFO: Lookups using dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local]

Jan  8 22:15:40.807: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.812: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.816: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.820: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.835: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.841: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.847: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.852: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:40.861: INFO: Lookups using dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local]

Jan  8 22:15:45.807: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.812: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.815: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.819: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.829: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.832: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.846: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.851: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local from pod dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e: the server could not find the requested resource (get pods dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e)
Jan  8 22:15:45.861: INFO: Lookups using dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6539.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6539.svc.cluster.local jessie_udp@dns-test-service-2.dns-6539.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6539.svc.cluster.local]

Jan  8 22:15:50.897: INFO: DNS probes using dns-6539/dns-test-dd12dcc2-3e9d-4704-bf4c-6ba71103891e succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:15:51.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6539" for this suite.

• [SLOW TEST:40.681 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":188,"skipped":3041,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
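
Editor's note: the probe names in the retries above (wheezy_udp@..., jessie_tcp@...) are per-image, per-protocol DNS queries against the headless service's cluster FQDN; the lookups fail until the pod's records propagate, then all succeed at 22:15:50. A minimal stdlib Go sketch of the underlying lookup, using the FQDN from this run (it resolves only from inside that cluster, so expect an error anywhere else):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // FQDN taken from the log above; only resolvable in-cluster.
        fqdn := "dns-test-service-2.dns-6539.svc.cluster.local"
        addrs, err := net.LookupHost(fqdn)
        if err != nil {
            // Mirrors the "Unable to read ..." retries in the log.
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved:", addrs)
    }
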
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:15:51.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-50630763-dc35-43c2-86ab-1d6e82859cd3
STEP: Creating a pod to test consume configMaps
Jan  8 22:15:51.500: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8" in namespace "configmap-6368" to be "success or failure"
Jan  8 22:15:51.554: INFO: Pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 54.360322ms
Jan  8 22:15:53.562: INFO: Pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062186564s
Jan  8 22:15:55.574: INFO: Pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073939134s
Jan  8 22:15:57.581: INFO: Pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081428167s
Jan  8 22:15:59.589: INFO: Pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088999066s
Jan  8 22:16:01.596: INFO: Pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095900957s
STEP: Saw pod success
Jan  8 22:16:01.596: INFO: Pod "pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8" satisfied condition "success or failure"
Jan  8 22:16:01.599: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8 container configmap-volume-test: 
STEP: delete the pod
Jan  8 22:16:01.673: INFO: Waiting for pod pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8 to disappear
Jan  8 22:16:01.688: INFO: Pod pod-configmaps-f1260f2f-d3cf-4ce1-9dd8-2e822fad1ad8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:16:01.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6368" for this suite.

• [SLOW TEST:10.495 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3082,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
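
Editor's note: the "Waiting up to 5m0s ... to be 'success or failure'" lines above are the framework's phase-poll loop: re-fetch the pod roughly every 2s until it reaches a terminal phase or the timeout expires. A self-contained sketch of that pattern, with a stubbed checkPhase standing in for the apiserver call (the function name and intervals here are illustrative, not the framework's):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollPodPhase retries checkPhase until a terminal phase or timeout.
    // checkPhase is a hypothetical stand-in for a GET against the apiserver.
    func pollPodPhase(checkPhase func() string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            switch phase := checkPhase(); phase {
            case "Succeeded", "Failed":
                fmt.Println("terminal phase:", phase)
                return nil
            default:
                fmt.Println("phase:", phase, "- still waiting")
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for pod")
    }

    func main() {
        calls := 0
        err := pollPodPhase(func() string {
            calls++
            if calls < 3 {
                return "Pending"
            }
            return "Succeeded"
        }, 10*time.Millisecond, time.Second) // log uses ~2s / 5m0s
        fmt.Println("result:", err)
    }
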
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:16:01.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:16:02.651: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 22:16:04.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:16:06.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:16:08.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118562, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:16:11.796: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:16:11.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4802" for this suite.
STEP: Destroying namespace "webhook-4802-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.417 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":190,"skipped":3107,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
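
Editor's note: "Registering the mutating configmap webhook via the AdmissionRegistration API" above means the apiserver will POST an AdmissionReview to the sample webhook before persisting the ConfigMap, and the webhook answers with a base64-encoded JSONPatch. A minimal sketch of such a handler, assuming admission.k8s.io/v1 field names; the URL path and the patched key are illustrative, not necessarily what the e2e webhook image uses, and the real deployment serves TLS:

    package main

    import (
        "encoding/base64"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // mutate always adds data/mutated=true to the incoming ConfigMap.
    func mutate(w http.ResponseWriter, r *http.Request) {
        var review struct {
            Request struct {
                UID string `json:"uid"`
            } `json:"request"`
        }
        _ = json.NewDecoder(r.Body).Decode(&review)

        patch := `[{"op":"add","path":"/data/mutated","value":"true"}]`
        resp := map[string]interface{}{
            "apiVersion": "admission.k8s.io/v1",
            "kind":       "AdmissionReview",
            "response": map[string]interface{}{
                "uid":       review.Request.UID,
                "allowed":   true,
                "patchType": "JSONPatch",
                "patch":     base64.StdEncoding.EncodeToString([]byte(patch)),
            },
        }
        w.Header().Set("Content-Type", "application/json")
        _ = json.NewEncoder(w).Encode(resp)
    }

    func main() {
        http.HandleFunc("/mutating-configmaps", mutate) // path is illustrative
        fmt.Println(http.ListenAndServe(":8444", nil))
    }
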
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:16:12.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan  8 22:16:12.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8489'
Jan  8 22:16:12.828: INFO: stderr: ""
Jan  8 22:16:12.828: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 22:16:12.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:13.085: INFO: stderr: ""
Jan  8 22:16:13.085: INFO: stdout: "update-demo-nautilus-jvjxx update-demo-nautilus-n7dmt "
Jan  8 22:16:13.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:13.194: INFO: stderr: ""
Jan  8 22:16:13.194: INFO: stdout: ""
Jan  8 22:16:13.194: INFO: update-demo-nautilus-jvjxx is created but not running
Jan  8 22:16:18.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:18.715: INFO: stderr: ""
Jan  8 22:16:18.715: INFO: stdout: "update-demo-nautilus-jvjxx update-demo-nautilus-n7dmt "
Jan  8 22:16:18.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:19.493: INFO: stderr: ""
Jan  8 22:16:19.494: INFO: stdout: ""
Jan  8 22:16:19.494: INFO: update-demo-nautilus-jvjxx is created but not running
Jan  8 22:16:24.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:24.702: INFO: stderr: ""
Jan  8 22:16:24.702: INFO: stdout: "update-demo-nautilus-jvjxx update-demo-nautilus-n7dmt "
Jan  8 22:16:24.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:24.804: INFO: stderr: ""
Jan  8 22:16:24.804: INFO: stdout: "true"
Jan  8 22:16:24.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:24.950: INFO: stderr: ""
Jan  8 22:16:24.950: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:16:24.951: INFO: validating pod update-demo-nautilus-jvjxx
Jan  8 22:16:24.956: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:16:24.956: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 22:16:24.956: INFO: update-demo-nautilus-jvjxx is verified up and running
Jan  8 22:16:24.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n7dmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:25.080: INFO: stderr: ""
Jan  8 22:16:25.080: INFO: stdout: "true"
Jan  8 22:16:25.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n7dmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:25.230: INFO: stderr: ""
Jan  8 22:16:25.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:16:25.231: INFO: validating pod update-demo-nautilus-n7dmt
Jan  8 22:16:25.237: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:16:25.238: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 22:16:25.238: INFO: update-demo-nautilus-n7dmt is verified up and running
STEP: scaling down the replication controller
Jan  8 22:16:25.241: INFO: scanned /root for discovery docs: 
Jan  8 22:16:25.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8489'
Jan  8 22:16:26.421: INFO: stderr: ""
Jan  8 22:16:26.421: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 22:16:26.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:26.652: INFO: stderr: ""
Jan  8 22:16:26.652: INFO: stdout: "update-demo-nautilus-jvjxx update-demo-nautilus-n7dmt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  8 22:16:31.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:31.882: INFO: stderr: ""
Jan  8 22:16:31.883: INFO: stdout: "update-demo-nautilus-jvjxx update-demo-nautilus-n7dmt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  8 22:16:36.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:37.103: INFO: stderr: ""
Jan  8 22:16:37.103: INFO: stdout: "update-demo-nautilus-jvjxx "
Jan  8 22:16:37.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:37.213: INFO: stderr: ""
Jan  8 22:16:37.213: INFO: stdout: "true"
Jan  8 22:16:37.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:37.322: INFO: stderr: ""
Jan  8 22:16:37.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:16:37.322: INFO: validating pod update-demo-nautilus-jvjxx
Jan  8 22:16:37.326: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:16:37.326: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 22:16:37.326: INFO: update-demo-nautilus-jvjxx is verified up and running
STEP: scaling up the replication controller
Jan  8 22:16:37.328: INFO: scanned /root for discovery docs: 
Jan  8 22:16:37.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8489'
Jan  8 22:16:38.511: INFO: stderr: ""
Jan  8 22:16:38.511: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 22:16:38.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:38.681: INFO: stderr: ""
Jan  8 22:16:38.681: INFO: stdout: "update-demo-nautilus-4k96r update-demo-nautilus-jvjxx "
Jan  8 22:16:38.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4k96r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:38.805: INFO: stderr: ""
Jan  8 22:16:38.805: INFO: stdout: ""
Jan  8 22:16:38.805: INFO: update-demo-nautilus-4k96r is created but not running
Jan  8 22:16:43.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8489'
Jan  8 22:16:44.006: INFO: stderr: ""
Jan  8 22:16:44.007: INFO: stdout: "update-demo-nautilus-4k96r update-demo-nautilus-jvjxx "
Jan  8 22:16:44.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4k96r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:44.140: INFO: stderr: ""
Jan  8 22:16:44.140: INFO: stdout: "true"
Jan  8 22:16:44.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4k96r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:44.270: INFO: stderr: ""
Jan  8 22:16:44.270: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:16:44.270: INFO: validating pod update-demo-nautilus-4k96r
Jan  8 22:16:44.276: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:16:44.276: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 22:16:44.276: INFO: update-demo-nautilus-4k96r is verified up and running
Jan  8 22:16:44.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:44.404: INFO: stderr: ""
Jan  8 22:16:44.404: INFO: stdout: "true"
Jan  8 22:16:44.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvjxx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8489'
Jan  8 22:16:44.603: INFO: stderr: ""
Jan  8 22:16:44.604: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:16:44.604: INFO: validating pod update-demo-nautilus-jvjxx
Jan  8 22:16:44.610: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:16:44.610: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 22:16:44.610: INFO: update-demo-nautilus-jvjxx is verified up and running
STEP: using delete to clean up resources
Jan  8 22:16:44.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8489'
Jan  8 22:16:44.725: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:16:44.725: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  8 22:16:44.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8489'
Jan  8 22:16:44.840: INFO: stderr: "No resources found in kubectl-8489 namespace.\n"
Jan  8 22:16:44.840: INFO: stdout: ""
Jan  8 22:16:44.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8489 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 22:16:45.031: INFO: stderr: ""
Jan  8 22:16:45.031: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:16:45.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8489" for this suite.

• [SLOW TEST:32.983 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":191,"skipped":3128,"failed":0}
SSSSS
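
Editor's note: the --template={{range.items}}{{.metadata.name}} {{end}} strings in the kubectl calls above are ordinary Go text/template programs; kubectl evaluates them against the pod list's JSON, which is why the field names are lowercase there. A standalone stdlib reproduction over a stand-in structure (exported field names, since this operates on Go structs rather than raw JSON):

    package main

    import (
        "os"
        "text/template"
    )

    // Only the fields the template touches are modeled here.
    type podMeta struct{ Name string }
    type podItem struct{ Metadata podMeta }
    type podList struct{ Items []podItem }

    func main() {
        tmpl := template.Must(template.New("names").Parse(
            "{{range .Items}}{{.Metadata.Name}} {{end}}"))
        list := podList{Items: []podItem{
            {Metadata: podMeta{Name: "update-demo-nautilus-jvjxx"}},
            {Metadata: podMeta{Name: "update-demo-nautilus-n7dmt"}},
        }}
        // Prints the same line as the "get pods -o template" stdout above.
        if err := tmpl.Execute(os.Stdout, list); err != nil {
            panic(err)
        }
    }
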
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:16:45.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-pw2t
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 22:16:45.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pw2t" in namespace "subpath-2288" to be "success or failure"
Jan  8 22:16:45.184: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Pending", Reason="", readiness=false. Elapsed: 7.092533ms
Jan  8 22:16:47.190: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013359637s
Jan  8 22:16:49.198: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021909047s
Jan  8 22:16:51.208: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031101592s
Jan  8 22:16:53.213: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036120121s
Jan  8 22:16:55.220: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 10.043340484s
Jan  8 22:16:57.226: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 12.049468131s
Jan  8 22:16:59.232: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 14.0554559s
Jan  8 22:17:01.237: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 16.060635041s
Jan  8 22:17:03.245: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 18.068180004s
Jan  8 22:17:05.252: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 20.075679598s
Jan  8 22:17:07.283: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 22.106088894s
Jan  8 22:17:09.291: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 24.11416321s
Jan  8 22:17:11.297: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 26.120736709s
Jan  8 22:17:13.304: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Running", Reason="", readiness=true. Elapsed: 28.127365736s
Jan  8 22:17:15.324: INFO: Pod "pod-subpath-test-projected-pw2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.147027961s
STEP: Saw pod success
Jan  8 22:17:15.324: INFO: Pod "pod-subpath-test-projected-pw2t" satisfied condition "success or failure"
Jan  8 22:17:15.333: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-pw2t container test-container-subpath-projected-pw2t: 
STEP: delete the pod
Jan  8 22:17:15.388: INFO: Waiting for pod pod-subpath-test-projected-pw2t to disappear
Jan  8 22:17:15.396: INFO: Pod pod-subpath-test-projected-pw2t no longer exists
STEP: Deleting pod pod-subpath-test-projected-pw2t
Jan  8 22:17:15.396: INFO: Deleting pod "pod-subpath-test-projected-pw2t" in namespace "subpath-2288"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:17:15.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2288" for this suite.

• [SLOW TEST:30.322 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":192,"skipped":3133,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
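
Editor's note: the "atomic-volume-subpath" pod above exercises volumes maintained by an atomic writer, which publishes updates by writing into a fresh versioned directory and then swapping a symlink, so a reader never observes a half-written snapshot. A minimal Unix-only sketch of that symlink-swap pattern under assumed names (the ..data convention mirrors how kubelet-projected volumes look on disk, but directory and key names here are illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        root, err := os.MkdirTemp("", "atomic")
        must(err)
        defer os.RemoveAll(root)

        // 1. Write the new payload into its own versioned directory.
        version := filepath.Join(root, "..2020_01_08_v1")
        must(os.Mkdir(version, 0o755))
        must(os.WriteFile(filepath.Join(version, "content"), []byte("v1"), 0o644))

        // 2. Point a temp symlink at it, then rename it over ..data;
        //    rename(2) is atomic, so readers see old or new, never partial.
        tmp := filepath.Join(root, "..data_tmp")
        must(os.Symlink(filepath.Base(version), tmp))
        must(os.Rename(tmp, filepath.Join(root, "..data")))

        data, err := os.ReadFile(filepath.Join(root, "..data", "content"))
        must(err)
        fmt.Println(string(data)) // "v1"
    }
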
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:17:15.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:17:16.329: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 22:17:18.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:17:20.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:17:22.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:17:25.405: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:17:25.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7283" for this suite.
STEP: Destroying namespace "webhook-7283-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.462 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":193,"skipped":3186,"failed":0}
SSSSSSSSSSSSSSS
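
Editor's note: each webhook test above blocks on the same repeated DeploymentStatus dumps until the rollout completes. The readiness condition is visible in the dumped fields themselves: the test keeps waiting while ReadyReplicas/AvailableReplicas are 0 and UnavailableReplicas is 1. A sketch of that predicate over a stand-in struct (mirroring only the fields printed in the log, not the real k8s.io/api type):

    package main

    import "fmt"

    type deploymentStatus struct {
        ObservedGeneration, Replicas, UpdatedReplicas int32
        ReadyReplicas, AvailableReplicas              int32
        UnavailableReplicas                           int32
    }

    // complete reports whether a rollout like sample-webhook-deployment
    // has finished: every desired replica updated, ready, and available.
    func complete(desired int32, s deploymentStatus) bool {
        return s.UpdatedReplicas == desired &&
            s.ReadyReplicas == desired &&
            s.AvailableReplicas == desired &&
            s.UnavailableReplicas == 0
    }

    func main() {
        // The state dumped repeatedly above, then the state that unblocks it.
        waiting := deploymentStatus{ObservedGeneration: 1, Replicas: 1,
            UpdatedReplicas: 1, UnavailableReplicas: 1}
        ready := deploymentStatus{ObservedGeneration: 1, Replicas: 1,
            UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1}
        fmt.Println(complete(1, waiting), complete(1, ready)) // false true
    }
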
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:17:25.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8697
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  8 22:17:25.930: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  8 22:17:58.113: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-8697 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 22:17:58.113: INFO: >>> kubeConfig: /root/.kube/config
I0108 22:17:58.172812       9 log.go:172] (0xc002a962c0) (0xc001058e60) Create stream
I0108 22:17:58.172905       9 log.go:172] (0xc002a962c0) (0xc001058e60) Stream added, broadcasting: 1
I0108 22:17:58.180318       9 log.go:172] (0xc002a962c0) Reply frame received for 1
I0108 22:17:58.180519       9 log.go:172] (0xc002a962c0) (0xc001b27e00) Create stream
I0108 22:17:58.180550       9 log.go:172] (0xc002a962c0) (0xc001b27e00) Stream added, broadcasting: 3
I0108 22:17:58.182760       9 log.go:172] (0xc002a962c0) Reply frame received for 3
I0108 22:17:58.182794       9 log.go:172] (0xc002a962c0) (0xc000877f40) Create stream
I0108 22:17:58.182807       9 log.go:172] (0xc002a962c0) (0xc000877f40) Stream added, broadcasting: 5
I0108 22:17:58.189838       9 log.go:172] (0xc002a962c0) Reply frame received for 5
I0108 22:17:58.293893       9 log.go:172] (0xc002a962c0) Data frame received for 3
I0108 22:17:58.294119       9 log.go:172] (0xc001b27e00) (3) Data frame handling
I0108 22:17:58.294215       9 log.go:172] (0xc001b27e00) (3) Data frame sent
I0108 22:17:58.399842       9 log.go:172] (0xc002a962c0) (0xc001b27e00) Stream removed, broadcasting: 3
I0108 22:17:58.400141       9 log.go:172] (0xc002a962c0) Data frame received for 1
I0108 22:17:58.400322       9 log.go:172] (0xc002a962c0) (0xc000877f40) Stream removed, broadcasting: 5
I0108 22:17:58.400372       9 log.go:172] (0xc001058e60) (1) Data frame handling
I0108 22:17:58.400396       9 log.go:172] (0xc001058e60) (1) Data frame sent
I0108 22:17:58.400414       9 log.go:172] (0xc002a962c0) (0xc001058e60) Stream removed, broadcasting: 1
I0108 22:17:58.400430       9 log.go:172] (0xc002a962c0) Go away received
I0108 22:17:58.401288       9 log.go:172] (0xc002a962c0) (0xc001058e60) Stream removed, broadcasting: 1
I0108 22:17:58.401324       9 log.go:172] (0xc002a962c0) (0xc001b27e00) Stream removed, broadcasting: 3
I0108 22:17:58.401332       9 log.go:172] (0xc002a962c0) (0xc000877f40) Stream removed, broadcasting: 5
Jan  8 22:17:58.401: INFO: Waiting for responses: map[]
Jan  8 22:17:58.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-8697 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 22:17:58.408: INFO: >>> kubeConfig: /root/.kube/config
I0108 22:17:58.456140       9 log.go:172] (0xc002932b00) (0xc000d0a640) Create stream
I0108 22:17:58.456270       9 log.go:172] (0xc002932b00) (0xc000d0a640) Stream added, broadcasting: 1
I0108 22:17:58.461703       9 log.go:172] (0xc002932b00) Reply frame received for 1
I0108 22:17:58.461744       9 log.go:172] (0xc002932b00) (0xc000d0a8c0) Create stream
I0108 22:17:58.461751       9 log.go:172] (0xc002932b00) (0xc000d0a8c0) Stream added, broadcasting: 3
I0108 22:17:58.463186       9 log.go:172] (0xc002932b00) Reply frame received for 3
I0108 22:17:58.463237       9 log.go:172] (0xc002932b00) (0xc000f26820) Create stream
I0108 22:17:58.463258       9 log.go:172] (0xc002932b00) (0xc000f26820) Stream added, broadcasting: 5
I0108 22:17:58.464912       9 log.go:172] (0xc002932b00) Reply frame received for 5
I0108 22:17:58.565995       9 log.go:172] (0xc002932b00) Data frame received for 3
I0108 22:17:58.566114       9 log.go:172] (0xc000d0a8c0) (3) Data frame handling
I0108 22:17:58.566148       9 log.go:172] (0xc000d0a8c0) (3) Data frame sent
I0108 22:17:58.654380       9 log.go:172] (0xc002932b00) Data frame received for 1
I0108 22:17:58.654471       9 log.go:172] (0xc002932b00) (0xc000d0a8c0) Stream removed, broadcasting: 3
I0108 22:17:58.654670       9 log.go:172] (0xc000d0a640) (1) Data frame handling
I0108 22:17:58.654715       9 log.go:172] (0xc000d0a640) (1) Data frame sent
I0108 22:17:58.654752       9 log.go:172] (0xc002932b00) (0xc000f26820) Stream removed, broadcasting: 5
I0108 22:17:58.654892       9 log.go:172] (0xc002932b00) (0xc000d0a640) Stream removed, broadcasting: 1
I0108 22:17:58.654911       9 log.go:172] (0xc002932b00) Go away received
I0108 22:17:58.655482       9 log.go:172] (0xc002932b00) (0xc000d0a640) Stream removed, broadcasting: 1
I0108 22:17:58.655503       9 log.go:172] (0xc002932b00) (0xc000d0a8c0) Stream removed, broadcasting: 3
I0108 22:17:58.655510       9 log.go:172] (0xc002932b00) (0xc000f26820) Stream removed, broadcasting: 5
Jan  8 22:17:58.655: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:17:58.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8697" for this suite.

• [SLOW TEST:32.777 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3201,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
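
Editor's note: the intra-pod check above works by curling the agnhost /dial endpoint on one test pod, which in turn dials the target pod and reports which hostname answered. A stdlib sketch that builds and issues the same query as the logged curl (the pod IPs are from this specific run and are unreachable outside that cluster):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    func main() {
        // Same query shape as the ExecWithOptions curl in the log.
        q := url.Values{}
        q.Set("request", "hostname")
        q.Set("protocol", "http")
        q.Set("host", "10.44.0.1")
        q.Set("port", "8080")
        q.Set("tries", "1")
        u := "http://10.44.0.2:8080/dial?" + q.Encode()

        resp, err := http.Get(u)
        if err != nil {
            fmt.Println("dial probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s => %s\n", u, body) // expect a JSON list of responders
    }
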
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:17:58.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:17:59.886: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 22:18:01.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:18:04.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:18:05.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:18:08.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:18:09.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:18:12.923: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan  8 22:18:20.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5993 to-be-attached-pod -i -c=container1'
Jan  8 22:18:21.200: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:18:21.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5993" for this suite.
STEP: Destroying namespace "webhook-5993-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.672 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":195,"skipped":3228,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:18:21.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan  8 22:18:21.391: INFO: Waiting up to 5m0s for pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3" in namespace "downward-api-9462" to be "success or failure"
Jan  8 22:18:21.398: INFO: Pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.857616ms
Jan  8 22:18:23.407: INFO: Pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015864475s
Jan  8 22:18:25.413: INFO: Pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022004197s
Jan  8 22:18:27.420: INFO: Pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029227045s
Jan  8 22:18:29.428: INFO: Pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037729058s
Jan  8 22:18:31.436: INFO: Pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045762638s
STEP: Saw pod success
Jan  8 22:18:31.437: INFO: Pod "downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3" satisfied condition "success or failure"
Jan  8 22:18:31.441: INFO: Trying to get logs from node jerma-node pod downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3 container dapi-container: 
STEP: delete the pod
Jan  8 22:18:31.515: INFO: Waiting for pod downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3 to disappear
Jan  8 22:18:31.525: INFO: Pod downward-api-57ce14e5-1ed7-49b6-b800-b58859e8dff3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:18:31.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9462" for this suite.

• [SLOW TEST:10.200 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3255,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:18:31.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 22:18:40.707: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:18:40.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2226" for this suite.

• [SLOW TEST:9.213 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3265,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:18:40.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  8 22:18:40.864: INFO: Waiting up to 5m0s for pod "pod-52f7128a-54fc-4bc4-821f-1528a333ca9e" in namespace "emptydir-4258" to be "success or failure"
Jan  8 22:18:40.883: INFO: Pod "pod-52f7128a-54fc-4bc4-821f-1528a333ca9e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.17305ms
Jan  8 22:18:42.896: INFO: Pod "pod-52f7128a-54fc-4bc4-821f-1528a333ca9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031525139s
Jan  8 22:18:44.908: INFO: Pod "pod-52f7128a-54fc-4bc4-821f-1528a333ca9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043408651s
Jan  8 22:18:46.915: INFO: Pod "pod-52f7128a-54fc-4bc4-821f-1528a333ca9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050829693s
Jan  8 22:18:48.930: INFO: Pod "pod-52f7128a-54fc-4bc4-821f-1528a333ca9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065896065s
STEP: Saw pod success
Jan  8 22:18:48.930: INFO: Pod "pod-52f7128a-54fc-4bc4-821f-1528a333ca9e" satisfied condition "success or failure"
Jan  8 22:18:48.935: INFO: Trying to get logs from node jerma-node pod pod-52f7128a-54fc-4bc4-821f-1528a333ca9e container test-container: 
STEP: delete the pod
Jan  8 22:18:48.983: INFO: Waiting for pod pod-52f7128a-54fc-4bc4-821f-1528a333ca9e to disappear
Jan  8 22:18:49.004: INFO: Pod pod-52f7128a-54fc-4bc4-821f-1528a333ca9e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:18:49.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4258" for this suite.

• [SLOW TEST:8.266 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3267,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:18:49.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-15cf4711-7462-423b-af8a-ecf59924ae6c
STEP: Creating a pod to test consume configMaps
Jan  8 22:18:49.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1" in namespace "projected-7931" to be "success or failure"
Jan  8 22:18:49.159: INFO: Pod "pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.728673ms
Jan  8 22:18:51.165: INFO: Pod "pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018013978s
Jan  8 22:18:53.173: INFO: Pod "pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026181225s
Jan  8 22:18:55.183: INFO: Pod "pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036054516s
Jan  8 22:18:57.191: INFO: Pod "pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04383855s
STEP: Saw pod success
Jan  8 22:18:57.191: INFO: Pod "pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1" satisfied condition "success or failure"
Jan  8 22:18:57.195: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 22:18:57.411: INFO: Waiting for pod pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1 to disappear
Jan  8 22:18:57.425: INFO: Pod pod-projected-configmaps-46c7dff6-a905-47ad-b5b1-d1df61f829e1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:18:57.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7931" for this suite.

• [SLOW TEST:8.419 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3281,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:18:57.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:18:57.616: INFO: Create a RollingUpdate DaemonSet
Jan  8 22:18:57.620: INFO: Check that daemon pods launch on every node of the cluster
Jan  8 22:18:57.657: INFO: Number of nodes with available pods: 0
Jan  8 22:18:57.657: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:18:59.470: INFO: Number of nodes with available pods: 0
Jan  8 22:18:59.470: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:18:59.788: INFO: Number of nodes with available pods: 0
Jan  8 22:18:59.788: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:19:00.671: INFO: Number of nodes with available pods: 0
Jan  8 22:19:00.671: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:19:01.683: INFO: Number of nodes with available pods: 0
Jan  8 22:19:01.683: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:19:03.440: INFO: Number of nodes with available pods: 0
Jan  8 22:19:03.441: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:19:04.581: INFO: Number of nodes with available pods: 0
Jan  8 22:19:04.581: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:19:04.750: INFO: Number of nodes with available pods: 0
Jan  8 22:19:04.750: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:19:05.700: INFO: Number of nodes with available pods: 1
Jan  8 22:19:05.700: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  8 22:19:06.671: INFO: Number of nodes with available pods: 1
Jan  8 22:19:06.672: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan  8 22:19:07.671: INFO: Number of nodes with available pods: 2
Jan  8 22:19:07.671: INFO: Number of running nodes: 2, number of available pods: 2
Jan  8 22:19:07.671: INFO: Update the DaemonSet to trigger a rollout
Jan  8 22:19:07.681: INFO: Updating DaemonSet daemon-set
Jan  8 22:19:23.716: INFO: Roll back the DaemonSet before rollout is complete
Jan  8 22:19:23.724: INFO: Updating DaemonSet daemon-set
Jan  8 22:19:23.724: INFO: Make sure DaemonSet rollback is complete
Jan  8 22:19:24.402: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:24.403: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:25.484: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:25.484: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:26.419: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:26.419: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:27.419: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:27.419: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:28.422: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:28.422: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:29.419: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:29.419: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:30.419: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:30.420: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:31.421: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:31.421: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:32.417: INFO: Wrong image for pod: daemon-set-vw4t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan  8 22:19:32.417: INFO: Pod daemon-set-vw4t8 is not available
Jan  8 22:19:33.425: INFO: Pod daemon-set-7ngb5 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3227, will wait for the garbage collector to delete the pods
Jan  8 22:19:33.512: INFO: Deleting DaemonSet.extensions daemon-set took: 8.623231ms
Jan  8 22:19:34.613: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.100496663s
Jan  8 22:19:39.128: INFO: Number of nodes with available pods: 0
Jan  8 22:19:39.128: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 22:19:39.134: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3227/daemonsets","resourceVersion":"900076"},"items":null}

Jan  8 22:19:39.138: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3227/pods","resourceVersion":"900076"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:19:39.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3227" for this suite.

• [SLOW TEST:41.726 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":200,"skipped":3286,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:19:39.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-72648ea3-ed27-4500-9813-32593233fd6e
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:19:39.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5094" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":201,"skipped":3301,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:19:39.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  8 22:19:39.754: INFO: Waiting up to 5m0s for pod "pod-918103da-8943-4c04-a821-6359a6ca0bd2" in namespace "emptydir-8785" to be "success or failure"
Jan  8 22:19:39.800: INFO: Pod "pod-918103da-8943-4c04-a821-6359a6ca0bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 45.419851ms
Jan  8 22:19:41.808: INFO: Pod "pod-918103da-8943-4c04-a821-6359a6ca0bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053900897s
Jan  8 22:19:43.816: INFO: Pod "pod-918103da-8943-4c04-a821-6359a6ca0bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061781299s
Jan  8 22:19:45.828: INFO: Pod "pod-918103da-8943-4c04-a821-6359a6ca0bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074052765s
Jan  8 22:19:47.835: INFO: Pod "pod-918103da-8943-4c04-a821-6359a6ca0bd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08086062s
STEP: Saw pod success
Jan  8 22:19:47.835: INFO: Pod "pod-918103da-8943-4c04-a821-6359a6ca0bd2" satisfied condition "success or failure"
Jan  8 22:19:47.840: INFO: Trying to get logs from node jerma-node pod pod-918103da-8943-4c04-a821-6359a6ca0bd2 container test-container: 
STEP: delete the pod
Jan  8 22:19:47.894: INFO: Waiting for pod pod-918103da-8943-4c04-a821-6359a6ca0bd2 to disappear
Jan  8 22:19:47.902: INFO: Pod pod-918103da-8943-4c04-a821-6359a6ca0bd2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:19:47.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8785" for this suite.

• [SLOW TEST:8.540 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3316,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:19:47.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:19:48.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4" in namespace "projected-5162" to be "success or failure"
Jan  8 22:19:48.020: INFO: Pod "downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10471ms
Jan  8 22:19:50.031: INFO: Pod "downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016844116s
Jan  8 22:19:52.044: INFO: Pod "downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030186061s
Jan  8 22:19:54.053: INFO: Pod "downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03904414s
Jan  8 22:19:56.062: INFO: Pod "downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047605257s
STEP: Saw pod success
Jan  8 22:19:56.062: INFO: Pod "downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4" satisfied condition "success or failure"
Jan  8 22:19:56.067: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4 container client-container: 
STEP: delete the pod
Jan  8 22:19:56.177: INFO: Waiting for pod downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4 to disappear
Jan  8 22:19:56.184: INFO: Pod downwardapi-volume-a5ffe3a3-c4c7-4489-a028-558adde1a2b4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:19:56.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5162" for this suite.

• [SLOW TEST:8.296 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3317,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:19:56.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jan  8 22:19:56.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  8 22:19:56.596: INFO: stderr: ""
Jan  8 22:19:56.596: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:19:56.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2387" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":204,"skipped":3342,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:19:56.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  8 22:20:03.591: INFO: 10 pods remaining
Jan  8 22:20:03.591: INFO: 0 pods have nil DeletionTimestamp
Jan  8 22:20:03.591: INFO: 
STEP: Gathering metrics
W0108 22:20:04.449060       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 22:20:04.449: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:20:04.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6465" for this suite.

• [SLOW TEST:8.102 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":205,"skipped":3373,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:20:04.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan  8 22:20:05.283: INFO: namespace kubectl-1471
Jan  8 22:20:05.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1471'
Jan  8 22:20:05.777: INFO: stderr: ""
Jan  8 22:20:05.777: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan  8 22:20:06.834: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:06.834: INFO: Found 0 / 1
Jan  8 22:20:07.787: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:07.787: INFO: Found 0 / 1
Jan  8 22:20:09.564: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:09.564: INFO: Found 0 / 1
Jan  8 22:20:11.199: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:11.199: INFO: Found 0 / 1
Jan  8 22:20:12.527: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:12.527: INFO: Found 0 / 1
Jan  8 22:20:13.543: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:13.543: INFO: Found 0 / 1
Jan  8 22:20:14.055: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:14.055: INFO: Found 0 / 1
Jan  8 22:20:14.784: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:14.784: INFO: Found 0 / 1
Jan  8 22:20:15.787: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:15.787: INFO: Found 0 / 1
Jan  8 22:20:16.781: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:16.781: INFO: Found 0 / 1
Jan  8 22:20:17.786: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:17.786: INFO: Found 0 / 1
Jan  8 22:20:18.784: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:18.784: INFO: Found 1 / 1
Jan  8 22:20:18.784: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan  8 22:20:18.808: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:20:18.808: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jan  8 22:20:18.808: INFO: wait on agnhost-master startup in kubectl-1471 
Jan  8 22:20:18.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-m6fjl agnhost-master --namespace=kubectl-1471'
Jan  8 22:20:19.056: INFO: stderr: ""
Jan  8 22:20:19.056: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan  8 22:20:19.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1471'
Jan  8 22:20:19.387: INFO: stderr: ""
Jan  8 22:20:19.388: INFO: stdout: "service/rm2 exposed\n"
Jan  8 22:20:19.422: INFO: Service rm2 in namespace kubectl-1471 found.
STEP: exposing service
Jan  8 22:20:21.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1471'
Jan  8 22:20:21.636: INFO: stderr: ""
Jan  8 22:20:21.636: INFO: stdout: "service/rm3 exposed\n"
Jan  8 22:20:21.645: INFO: Service rm3 in namespace kubectl-1471 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:20:23.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1471" for this suite.

• [SLOW TEST:18.898 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":206,"skipped":3385,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:20:23.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:20:24.454: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan  8 22:20:26.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:20:28.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:20:30.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:20:32.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714118824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:20:35.506: INFO: Waiting for the number of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:20:35.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:20:36.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5926" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.259 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":207,"skipped":3399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:20:36.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:20:54.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1860" for this suite.

• [SLOW TEST:17.258 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":208,"skipped":3428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:20:54.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-3154249f-80f4-48c9-8bbe-a88907f1b9df in namespace container-probe-9190
Jan  8 22:21:02.353: INFO: Started pod test-webserver-3154249f-80f4-48c9-8bbe-a88907f1b9df in namespace container-probe-9190
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 22:21:02.356: INFO: Initial restart count of pod test-webserver-3154249f-80f4-48c9-8bbe-a88907f1b9df is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:25:04.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9190" for this suite.

• [SLOW TEST:250.022 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3490,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:25:04.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  8 22:25:13.045: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b7fb2260-f92b-423d-8c8c-9ae6a12440af"
Jan  8 22:25:13.045: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b7fb2260-f92b-423d-8c8c-9ae6a12440af" in namespace "pods-549" to be "terminated due to deadline exceeded"
Jan  8 22:25:13.100: INFO: Pod "pod-update-activedeadlineseconds-b7fb2260-f92b-423d-8c8c-9ae6a12440af": Phase="Running", Reason="", readiness=true. Elapsed: 54.171789ms
Jan  8 22:25:15.113: INFO: Pod "pod-update-activedeadlineseconds-b7fb2260-f92b-423d-8c8c-9ae6a12440af": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.067114258s
Jan  8 22:25:15.113: INFO: Pod "pod-update-activedeadlineseconds-b7fb2260-f92b-423d-8c8c-9ae6a12440af" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:25:15.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-549" for this suite.

• [SLOW TEST:10.894 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:25:15.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:25:50.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4436" for this suite.
STEP: Destroying namespace "nsdeletetest-137" for this suite.
Jan  8 22:25:50.639: INFO: Namespace nsdeletetest-137 was already deleted
STEP: Destroying namespace "nsdeletetest-5353" for this suite.

• [SLOW TEST:35.516 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":211,"skipped":3555,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:25:50.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:25:50.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01" in namespace "downward-api-5246" to be "success or failure"
Jan  8 22:25:50.772: INFO: Pod "downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01": Phase="Pending", Reason="", readiness=false. Elapsed: 16.939304ms
Jan  8 22:25:52.779: INFO: Pod "downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024596357s
Jan  8 22:25:54.787: INFO: Pod "downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031758016s
Jan  8 22:25:56.795: INFO: Pod "downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040253249s
Jan  8 22:25:58.805: INFO: Pod "downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050477484s
STEP: Saw pod success
Jan  8 22:25:58.806: INFO: Pod "downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01" satisfied condition "success or failure"
Jan  8 22:25:58.811: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01 container client-container: 
STEP: delete the pod
Jan  8 22:25:58.901: INFO: Waiting for pod downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01 to disappear
Jan  8 22:25:58.907: INFO: Pod downwardapi-volume-ae255d1b-19cb-4748-8549-a3203e522d01 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:25:58.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5246" for this suite.

• [SLOW TEST:8.273 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3573,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:25:58.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi-version CRD
Jan  8 22:25:58.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:26:14.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-441" for this suite.

• [SLOW TEST:15.497 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":213,"skipped":3577,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:26:14.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  8 22:26:14.572: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901432 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 22:26:14.573: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901432 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  8 22:26:24.610: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901461 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  8 22:26:24.611: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901461 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  8 22:26:34.625: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901485 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 22:26:34.625: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901485 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  8 22:26:44.635: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901509 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 22:26:44.635: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-a 5d22ea5c-6856-4fb5-ab01-4f4c435bb81d 901509 0 2020-01-08 22:26:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  8 22:26:54.653: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-b 503be57c-b0d5-4c08-98f8-990d87cea424 901531 0 2020-01-08 22:26:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 22:26:54.653: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-b 503be57c-b0d5-4c08-98f8-990d87cea424 901531 0 2020-01-08 22:26:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  8 22:27:04.666: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-b 503be57c-b0d5-4c08-98f8-990d87cea424 901553 0 2020-01-08 22:26:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 22:27:04.667: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4524 /api/v1/namespaces/watch-4524/configmaps/e2e-watch-test-configmap-b 503be57c-b0d5-4c08-98f8-990d87cea424 901553 0 2020-01-08 22:26:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:27:14.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4524" for this suite.

• [SLOW TEST:60.278 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":214,"skipped":3580,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:27:14.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan  8 22:27:14.782: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:27:23.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8734" for this suite.

• [SLOW TEST:9.010 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":215,"skipped":3593,"failed":0}
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:27:23.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:27:24.052: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  8 22:27:24.068: INFO: Number of nodes with available pods: 0
Jan  8 22:27:24.068: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  8 22:27:24.255: INFO: Number of nodes with available pods: 0
Jan  8 22:27:24.255: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:25.275: INFO: Number of nodes with available pods: 0
Jan  8 22:27:25.275: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:26.269: INFO: Number of nodes with available pods: 0
Jan  8 22:27:26.270: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:27.286: INFO: Number of nodes with available pods: 0
Jan  8 22:27:27.286: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:28.263: INFO: Number of nodes with available pods: 0
Jan  8 22:27:28.263: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:29.947: INFO: Number of nodes with available pods: 0
Jan  8 22:27:29.947: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:30.285: INFO: Number of nodes with available pods: 0
Jan  8 22:27:30.285: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:31.267: INFO: Number of nodes with available pods: 0
Jan  8 22:27:31.268: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:32.279: INFO: Number of nodes with available pods: 1
Jan  8 22:27:32.279: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  8 22:27:32.317: INFO: Number of nodes with available pods: 1
Jan  8 22:27:32.317: INFO: Number of running nodes: 0, number of available pods: 1
Jan  8 22:27:33.325: INFO: Number of nodes with available pods: 0
Jan  8 22:27:33.325: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  8 22:27:33.340: INFO: Number of nodes with available pods: 0
Jan  8 22:27:33.340: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:34.357: INFO: Number of nodes with available pods: 0
Jan  8 22:27:34.357: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:35.351: INFO: Number of nodes with available pods: 0
Jan  8 22:27:35.351: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:36.359: INFO: Number of nodes with available pods: 0
Jan  8 22:27:36.359: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:37.350: INFO: Number of nodes with available pods: 0
Jan  8 22:27:37.350: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:38.446: INFO: Number of nodes with available pods: 0
Jan  8 22:27:38.446: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:39.350: INFO: Number of nodes with available pods: 0
Jan  8 22:27:39.350: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:40.351: INFO: Number of nodes with available pods: 0
Jan  8 22:27:40.351: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:41.349: INFO: Number of nodes with available pods: 0
Jan  8 22:27:41.350: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:42.384: INFO: Number of nodes with available pods: 0
Jan  8 22:27:42.384: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:43.348: INFO: Number of nodes with available pods: 0
Jan  8 22:27:43.348: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:44.349: INFO: Number of nodes with available pods: 0
Jan  8 22:27:44.349: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:45.352: INFO: Number of nodes with available pods: 0
Jan  8 22:27:45.352: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:46.347: INFO: Number of nodes with available pods: 0
Jan  8 22:27:46.347: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:47.348: INFO: Number of nodes with available pods: 0
Jan  8 22:27:47.348: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:48.347: INFO: Number of nodes with available pods: 0
Jan  8 22:27:48.348: INFO: Node jerma-node is running more than one daemon pod
Jan  8 22:27:49.349: INFO: Number of nodes with available pods: 1
Jan  8 22:27:49.349: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3452, will wait for the garbage collector to delete the pods
Jan  8 22:27:49.422: INFO: Deleting DaemonSet.extensions daemon-set took: 10.343184ms
Jan  8 22:27:49.722: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.42403ms
Jan  8 22:28:02.428: INFO: Number of nodes with available pods: 0
Jan  8 22:28:02.428: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 22:28:02.432: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3452/daemonsets","resourceVersion":"901772"},"items":null}

Jan  8 22:28:02.435: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3452/pods","resourceVersion":"901772"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:28:02.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3452" for this suite.

• [SLOW TEST:38.779 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":216,"skipped":3593,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:28:02.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-f6752ffb-203f-49cd-9ec9-4d0d2e576150
STEP: Creating a pod to test consume configMaps
Jan  8 22:28:02.616: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29" in namespace "configmap-5289" to be "success or failure"
Jan  8 22:28:02.649: INFO: Pod "pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29": Phase="Pending", Reason="", readiness=false. Elapsed: 32.955949ms
Jan  8 22:28:04.656: INFO: Pod "pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040003892s
Jan  8 22:28:06.666: INFO: Pod "pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049484515s
Jan  8 22:28:08.674: INFO: Pod "pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057574888s
Jan  8 22:28:10.687: INFO: Pod "pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070490862s
STEP: Saw pod success
Jan  8 22:28:10.687: INFO: Pod "pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29" satisfied condition "success or failure"
Jan  8 22:28:10.692: INFO: Trying to get logs from node jerma-node pod pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29 container configmap-volume-test: 
STEP: delete the pod
Jan  8 22:28:10.779: INFO: Waiting for pod pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29 to disappear
Jan  8 22:28:10.787: INFO: Pod pod-configmaps-3ca4ee9d-892b-46e8-84ce-449f842c5d29 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:28:10.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5289" for this suite.

• [SLOW TEST:8.321 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3604,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:28:10.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9763
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9763
I0108 22:28:11.020671       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9763, replica count: 2
I0108 22:28:14.071895       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:28:17.072361       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:28:20.072633       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:28:23.073048       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  8 22:28:23.073: INFO: Creating new exec pod
Jan  8 22:28:30.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9763 execpodzlkrl -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan  8 22:28:32.619: INFO: stderr: "I0108 22:28:32.394920    3365 log.go:172] (0xc000a14000) (0xc000a0a320) Create stream\nI0108 22:28:32.395074    3365 log.go:172] (0xc000a14000) (0xc000a0a320) Stream added, broadcasting: 1\nI0108 22:28:32.399969    3365 log.go:172] (0xc000a14000) Reply frame received for 1\nI0108 22:28:32.400028    3365 log.go:172] (0xc000a14000) (0xc000a0a3c0) Create stream\nI0108 22:28:32.400046    3365 log.go:172] (0xc000a14000) (0xc000a0a3c0) Stream added, broadcasting: 3\nI0108 22:28:32.401245    3365 log.go:172] (0xc000a14000) Reply frame received for 3\nI0108 22:28:32.401287    3365 log.go:172] (0xc000a14000) (0xc0005d6000) Create stream\nI0108 22:28:32.401301    3365 log.go:172] (0xc000a14000) (0xc0005d6000) Stream added, broadcasting: 5\nI0108 22:28:32.402843    3365 log.go:172] (0xc000a14000) Reply frame received for 5\nI0108 22:28:32.469720    3365 log.go:172] (0xc000a14000) Data frame received for 5\nI0108 22:28:32.469871    3365 log.go:172] (0xc0005d6000) (5) Data frame handling\nI0108 22:28:32.469911    3365 log.go:172] (0xc0005d6000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0108 22:28:32.478388    3365 log.go:172] (0xc000a14000) Data frame received for 5\nI0108 22:28:32.478831    3365 log.go:172] (0xc0005d6000) (5) Data frame handling\nI0108 22:28:32.478939    3365 log.go:172] (0xc0005d6000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0108 22:28:32.594064    3365 log.go:172] (0xc000a14000) Data frame received for 1\nI0108 22:28:32.594162    3365 log.go:172] (0xc000a0a320) (1) Data frame handling\nI0108 22:28:32.594194    3365 log.go:172] (0xc000a0a320) (1) Data frame sent\nI0108 22:28:32.594726    3365 log.go:172] (0xc000a14000) (0xc000a0a320) Stream removed, broadcasting: 1\nI0108 22:28:32.595482    3365 log.go:172] (0xc000a14000) (0xc000a0a3c0) Stream removed, broadcasting: 3\nI0108 22:28:32.595554    3365 log.go:172] (0xc000a14000) (0xc0005d6000) Stream removed, broadcasting: 5\nI0108 22:28:32.595673    3365 log.go:172] (0xc000a14000) Go away received\nI0108 22:28:32.596397    3365 log.go:172] (0xc000a14000) (0xc000a0a320) Stream removed, broadcasting: 1\nI0108 22:28:32.596424    3365 log.go:172] (0xc000a14000) (0xc000a0a3c0) Stream removed, broadcasting: 3\nI0108 22:28:32.596440    3365 log.go:172] (0xc000a14000) (0xc0005d6000) Stream removed, broadcasting: 5\n"
Jan  8 22:28:32.619: INFO: stdout: ""
Jan  8 22:28:32.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9763 execpodzlkrl -- /bin/sh -x -c nc -zv -t -w 2 10.96.189.161 80'
Jan  8 22:28:32.960: INFO: stderr: "I0108 22:28:32.781903    3398 log.go:172] (0xc00095e0b0) (0xc00024f4a0) Create stream\nI0108 22:28:32.782061    3398 log.go:172] (0xc00095e0b0) (0xc00024f4a0) Stream added, broadcasting: 1\nI0108 22:28:32.784716    3398 log.go:172] (0xc00095e0b0) Reply frame received for 1\nI0108 22:28:32.784758    3398 log.go:172] (0xc00095e0b0) (0xc00097e000) Create stream\nI0108 22:28:32.784779    3398 log.go:172] (0xc00095e0b0) (0xc00097e000) Stream added, broadcasting: 3\nI0108 22:28:32.785822    3398 log.go:172] (0xc00095e0b0) Reply frame received for 3\nI0108 22:28:32.785841    3398 log.go:172] (0xc00095e0b0) (0xc000657a40) Create stream\nI0108 22:28:32.785846    3398 log.go:172] (0xc00095e0b0) (0xc000657a40) Stream added, broadcasting: 5\nI0108 22:28:32.786939    3398 log.go:172] (0xc00095e0b0) Reply frame received for 5\nI0108 22:28:32.844545    3398 log.go:172] (0xc00095e0b0) Data frame received for 5\nI0108 22:28:32.844659    3398 log.go:172] (0xc000657a40) (5) Data frame handling\nI0108 22:28:32.844684    3398 log.go:172] (0xc000657a40) (5) Data frame sent\n+ ncI0108 22:28:32.844927    3398 log.go:172] (0xc00095e0b0) Data frame received for 5\nI0108 22:28:32.844987    3398 log.go:172] (0xc000657a40) (5) Data frame handling\nI0108 22:28:32.844999    3398 log.go:172] (0xc000657a40) (5) Data frame sent\n -zvI0108 22:28:32.845370    3398 log.go:172] (0xc00095e0b0) Data frame received for 5\nI0108 22:28:32.845382    3398 log.go:172] (0xc000657a40) (5) Data frame handling\nI0108 22:28:32.845393    3398 log.go:172] (0xc000657a40) (5) Data frame sent\n -t -w 2 10.96.189.161 80\nI0108 22:28:32.852429    3398 log.go:172] (0xc00095e0b0) Data frame received for 5\nI0108 22:28:32.852518    3398 log.go:172] (0xc000657a40) (5) Data frame handling\nI0108 22:28:32.852545    3398 log.go:172] (0xc000657a40) (5) Data frame sent\nConnection to 10.96.189.161 80 port [tcp/http] succeeded!\nI0108 22:28:32.949083    3398 log.go:172] (0xc00095e0b0) Data frame received for 1\nI0108 22:28:32.949278    3398 log.go:172] (0xc00095e0b0) (0xc00097e000) Stream removed, broadcasting: 3\nI0108 22:28:32.949369    3398 log.go:172] (0xc00024f4a0) (1) Data frame handling\nI0108 22:28:32.949420    3398 log.go:172] (0xc00024f4a0) (1) Data frame sent\nI0108 22:28:32.949489    3398 log.go:172] (0xc00095e0b0) (0xc00024f4a0) Stream removed, broadcasting: 1\nI0108 22:28:32.949996    3398 log.go:172] (0xc00095e0b0) (0xc000657a40) Stream removed, broadcasting: 5\nI0108 22:28:32.950330    3398 log.go:172] (0xc00095e0b0) Go away received\nI0108 22:28:32.950436    3398 log.go:172] (0xc00095e0b0) (0xc00024f4a0) Stream removed, broadcasting: 1\nI0108 22:28:32.950463    3398 log.go:172] (0xc00095e0b0) (0xc00097e000) Stream removed, broadcasting: 3\nI0108 22:28:32.950481    3398 log.go:172] (0xc00095e0b0) (0xc000657a40) Stream removed, broadcasting: 5\n"
Jan  8 22:28:32.960: INFO: stdout: ""
Jan  8 22:28:32.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9763 execpodzlkrl -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30609'
Jan  8 22:28:33.288: INFO: stderr: "I0108 22:28:33.105093    3419 log.go:172] (0xc0008e4630) (0xc0009a0320) Create stream\nI0108 22:28:33.105352    3419 log.go:172] (0xc0008e4630) (0xc0009a0320) Stream added, broadcasting: 1\nI0108 22:28:33.108891    3419 log.go:172] (0xc0008e4630) Reply frame received for 1\nI0108 22:28:33.108927    3419 log.go:172] (0xc0008e4630) (0xc0009a03c0) Create stream\nI0108 22:28:33.108937    3419 log.go:172] (0xc0008e4630) (0xc0009a03c0) Stream added, broadcasting: 3\nI0108 22:28:33.110389    3419 log.go:172] (0xc0008e4630) Reply frame received for 3\nI0108 22:28:33.110412    3419 log.go:172] (0xc0008e4630) (0xc0009a0460) Create stream\nI0108 22:28:33.110419    3419 log.go:172] (0xc0008e4630) (0xc0009a0460) Stream added, broadcasting: 5\nI0108 22:28:33.111840    3419 log.go:172] (0xc0008e4630) Reply frame received for 5\nI0108 22:28:33.201170    3419 log.go:172] (0xc0008e4630) Data frame received for 5\nI0108 22:28:33.201220    3419 log.go:172] (0xc0009a0460) (5) Data frame handling\nI0108 22:28:33.201235    3419 log.go:172] (0xc0009a0460) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30609\nI0108 22:28:33.205061    3419 log.go:172] (0xc0008e4630) Data frame received for 5\nI0108 22:28:33.205116    3419 log.go:172] (0xc0009a0460) (5) Data frame handling\nI0108 22:28:33.205157    3419 log.go:172] (0xc0009a0460) (5) Data frame sent\nConnection to 10.96.2.250 30609 port [tcp/30609] succeeded!\nI0108 22:28:33.275267    3419 log.go:172] (0xc0008e4630) (0xc0009a0460) Stream removed, broadcasting: 5\nI0108 22:28:33.275616    3419 log.go:172] (0xc0008e4630) Data frame received for 1\nI0108 22:28:33.275647    3419 log.go:172] (0xc0009a0320) (1) Data frame handling\nI0108 22:28:33.275679    3419 log.go:172] (0xc0008e4630) (0xc0009a03c0) Stream removed, broadcasting: 3\nI0108 22:28:33.275797    3419 log.go:172] (0xc0009a0320) (1) Data frame sent\nI0108 22:28:33.275815    3419 log.go:172] (0xc0008e4630) (0xc0009a0320) Stream removed, broadcasting: 1\nI0108 22:28:33.275880    3419 log.go:172] (0xc0008e4630) Go away received\nI0108 22:28:33.276776    3419 log.go:172] (0xc0008e4630) (0xc0009a0320) Stream removed, broadcasting: 1\nI0108 22:28:33.276795    3419 log.go:172] (0xc0008e4630) (0xc0009a03c0) Stream removed, broadcasting: 3\nI0108 22:28:33.276800    3419 log.go:172] (0xc0008e4630) (0xc0009a0460) Stream removed, broadcasting: 5\n"
Jan  8 22:28:33.288: INFO: stdout: ""
Jan  8 22:28:33.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9763 execpodzlkrl -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30609'
Jan  8 22:28:33.696: INFO: stderr: "I0108 22:28:33.505145    3440 log.go:172] (0xc000a63550) (0xc000a58640) Create stream\nI0108 22:28:33.505433    3440 log.go:172] (0xc000a63550) (0xc000a58640) Stream added, broadcasting: 1\nI0108 22:28:33.518257    3440 log.go:172] (0xc000a63550) Reply frame received for 1\nI0108 22:28:33.518329    3440 log.go:172] (0xc000a63550) (0xc0007305a0) Create stream\nI0108 22:28:33.518344    3440 log.go:172] (0xc000a63550) (0xc0007305a0) Stream added, broadcasting: 3\nI0108 22:28:33.519950    3440 log.go:172] (0xc000a63550) Reply frame received for 3\nI0108 22:28:33.519987    3440 log.go:172] (0xc000a63550) (0xc00059d360) Create stream\nI0108 22:28:33.519999    3440 log.go:172] (0xc000a63550) (0xc00059d360) Stream added, broadcasting: 5\nI0108 22:28:33.521579    3440 log.go:172] (0xc000a63550) Reply frame received for 5\nI0108 22:28:33.581497    3440 log.go:172] (0xc000a63550) Data frame received for 5\nI0108 22:28:33.581626    3440 log.go:172] (0xc00059d360) (5) Data frame handling\nI0108 22:28:33.581680    3440 log.go:172] (0xc00059d360) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30609\nI0108 22:28:33.582374    3440 log.go:172] (0xc000a63550) Data frame received for 5\nI0108 22:28:33.582390    3440 log.go:172] (0xc00059d360) (5) Data frame handling\nI0108 22:28:33.582409    3440 log.go:172] (0xc00059d360) (5) Data frame sent\nConnection to 10.96.1.234 30609 port [tcp/30609] succeeded!\nI0108 22:28:33.672445    3440 log.go:172] (0xc000a63550) (0xc0007305a0) Stream removed, broadcasting: 3\nI0108 22:28:33.672879    3440 log.go:172] (0xc000a63550) Data frame received for 1\nI0108 22:28:33.673189    3440 log.go:172] (0xc000a63550) (0xc00059d360) Stream removed, broadcasting: 5\nI0108 22:28:33.673289    3440 log.go:172] (0xc000a58640) (1) Data frame handling\nI0108 22:28:33.673345    3440 log.go:172] (0xc000a58640) (1) Data frame sent\nI0108 22:28:33.673465    3440 log.go:172] (0xc000a63550) (0xc000a58640) Stream removed, broadcasting: 1\nI0108 22:28:33.673510    3440 log.go:172] (0xc000a63550) Go away received\nI0108 22:28:33.675379    3440 log.go:172] (0xc000a63550) (0xc000a58640) Stream removed, broadcasting: 1\nI0108 22:28:33.675442    3440 log.go:172] (0xc000a63550) (0xc0007305a0) Stream removed, broadcasting: 3\nI0108 22:28:33.675479    3440 log.go:172] (0xc000a63550) (0xc00059d360) Stream removed, broadcasting: 5\n"
Jan  8 22:28:33.696: INFO: stdout: ""
Jan  8 22:28:33.696: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:28:33.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9763" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.018 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":218,"skipped":3629,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:28:33.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Jan  8 22:28:45.075: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:28:46.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1065" for this suite.

• [SLOW TEST:12.403 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":219,"skipped":3636,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:28:46.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan  8 22:28:46.395: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  8 22:28:46.476: INFO: Waiting for terminating namespaces to be deleted...
Jan  8 22:28:46.480: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan  8 22:28:46.505: INFO: pod-adoption-release from replicaset-1065 started at 2020-01-08 22:28:33 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.505: INFO: 	Container pod-adoption-release ready: true, restart count 0
Jan  8 22:28:46.505: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan  8 22:28:46.505: INFO: 	Container weave ready: true, restart count 1
Jan  8 22:28:46.505: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 22:28:46.505: INFO: pod-adoption-release-8lzq8 from replicaset-1065 started at 2020-01-08 22:28:45 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.505: INFO: 	Container pod-adoption-release ready: false, restart count 0
Jan  8 22:28:46.505: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.505: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 22:28:46.506: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan  8 22:28:46.527: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.527: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jan  8 22:28:46.527: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.527: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 22:28:46.527: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan  8 22:28:46.527: INFO: 	Container weave ready: true, restart count 0
Jan  8 22:28:46.527: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 22:28:46.527: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.527: INFO: 	Container kube-scheduler ready: true, restart count 2
Jan  8 22:28:46.527: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.527: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan  8 22:28:46.527: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.527: INFO: 	Container etcd ready: true, restart count 1
Jan  8 22:28:46.527: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.527: INFO: 	Container coredns ready: true, restart count 0
Jan  8 22:28:46.527: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 22:28:46.527: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan  8 22:28:46.686: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan  8 22:28:46.686: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Jan  8 22:28:46.686: INFO: Pod pod-adoption-release requesting resource cpu=0m on Node jerma-node
Jan  8 22:28:46.686: INFO: Pod pod-adoption-release-8lzq8 requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Jan  8 22:28:46.686: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan  8 22:28:46.804: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-596ec4f8-eb78-44b7-87d0-cb292f91d835.15e809dff1aa56ef], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2980/filler-pod-596ec4f8-eb78-44b7-87d0-cb292f91d835 to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-596ec4f8-eb78-44b7-87d0-cb292f91d835.15e809e0d2815346], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-596ec4f8-eb78-44b7-87d0-cb292f91d835.15e809e18e7cf66e], Reason = [Created], Message = [Created container filler-pod-596ec4f8-eb78-44b7-87d0-cb292f91d835]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-596ec4f8-eb78-44b7-87d0-cb292f91d835.15e809e1b7890145], Reason = [Started], Message = [Started container filler-pod-596ec4f8-eb78-44b7-87d0-cb292f91d835]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e12e526a-c2b0-4be0-94fa-39589cc513a7.15e809dff0d06c60], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2980/filler-pod-e12e526a-c2b0-4be0-94fa-39589cc513a7 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e12e526a-c2b0-4be0-94fa-39589cc513a7.15e809e162e9864b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e12e526a-c2b0-4be0-94fa-39589cc513a7.15e809e2498c2a80], Reason = [Created], Message = [Created container filler-pod-e12e526a-c2b0-4be0-94fa-39589cc513a7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e12e526a-c2b0-4be0-94fa-39589cc513a7.15e809e28fc96119], Reason = [Started], Message = [Started container filler-pod-e12e526a-c2b0-4be0-94fa-39589cc513a7]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e809e336c6e2b3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e809e338b89768], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:29:01.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2980" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:15.762 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":220,"skipped":3652,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:29:01.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan  8 22:29:02.048: INFO: >>> kubeConfig: /root/.kube/config
Jan  8 22:29:05.511: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:29:18.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2834" for this suite.

• [SLOW TEST:16.224 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":221,"skipped":3671,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:29:18.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  8 22:29:18.324: INFO: Waiting up to 5m0s for pod "pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199" in namespace "emptydir-1736" to be "success or failure"
Jan  8 22:29:18.335: INFO: Pod "pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199": Phase="Pending", Reason="", readiness=false. Elapsed: 10.178822ms
Jan  8 22:29:20.343: INFO: Pod "pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018974051s
Jan  8 22:29:22.349: INFO: Pod "pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024889943s
Jan  8 22:29:24.359: INFO: Pod "pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03488714s
Jan  8 22:29:26.366: INFO: Pod "pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041710494s
STEP: Saw pod success
Jan  8 22:29:26.366: INFO: Pod "pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199" satisfied condition "success or failure"
Jan  8 22:29:26.370: INFO: Trying to get logs from node jerma-node pod pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199 container test-container: 
STEP: delete the pod
Jan  8 22:29:26.425: INFO: Waiting for pod pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199 to disappear
Jan  8 22:29:26.439: INFO: Pod pod-b361d390-1da8-4b4e-b171-1b9cf0ee8199 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:29:26.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1736" for this suite.

• [SLOW TEST:8.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:29:26.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-42c079e9-5d32-48fb-a0a1-be4a8b715af9
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-42c079e9-5d32-48fb-a0a1-be4a8b715af9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:31:00.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1273" for this suite.

• [SLOW TEST:93.695 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3759,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:31:00.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:31:00.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06" in namespace "downward-api-3982" to be "success or failure"
Jan  8 22:31:00.333: INFO: Pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06": Phase="Pending", Reason="", readiness=false. Elapsed: 12.863273ms
Jan  8 22:31:02.338: INFO: Pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017097105s
Jan  8 22:31:04.408: INFO: Pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08721298s
Jan  8 22:31:06.414: INFO: Pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092947961s
Jan  8 22:31:08.453: INFO: Pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132838309s
Jan  8 22:31:10.466: INFO: Pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144979837s
STEP: Saw pod success
Jan  8 22:31:10.466: INFO: Pod "downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06" satisfied condition "success or failure"
Jan  8 22:31:10.473: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06 container client-container: 
STEP: delete the pod
Jan  8 22:31:10.569: INFO: Waiting for pod downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06 to disappear
Jan  8 22:31:10.585: INFO: Pod downwardapi-volume-c128c2e6-808f-4479-a1fc-bfac57889a06 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:31:10.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3982" for this suite.

• [SLOW TEST:10.455 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3768,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:31:10.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  8 22:31:21.594: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2559 PodName:pod-sharedvolume-1538b55e-d5ac-4402-89dc-9da3f29dd3b9 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 22:31:21.595: INFO: >>> kubeConfig: /root/.kube/config
I0108 22:31:21.662899       9 log.go:172] (0xc002ad53f0) (0xc0001a3a40) Create stream
I0108 22:31:21.662961       9 log.go:172] (0xc002ad53f0) (0xc0001a3a40) Stream added, broadcasting: 1
I0108 22:31:21.667355       9 log.go:172] (0xc002ad53f0) Reply frame received for 1
I0108 22:31:21.667406       9 log.go:172] (0xc002ad53f0) (0xc0008ca0a0) Create stream
I0108 22:31:21.667419       9 log.go:172] (0xc002ad53f0) (0xc0008ca0a0) Stream added, broadcasting: 3
I0108 22:31:21.669810       9 log.go:172] (0xc002ad53f0) Reply frame received for 3
I0108 22:31:21.669870       9 log.go:172] (0xc002ad53f0) (0xc0016d4000) Create stream
I0108 22:31:21.669896       9 log.go:172] (0xc002ad53f0) (0xc0016d4000) Stream added, broadcasting: 5
I0108 22:31:21.674163       9 log.go:172] (0xc002ad53f0) Reply frame received for 5
I0108 22:31:21.781412       9 log.go:172] (0xc002ad53f0) Data frame received for 3
I0108 22:31:21.781504       9 log.go:172] (0xc0008ca0a0) (3) Data frame handling
I0108 22:31:21.781543       9 log.go:172] (0xc0008ca0a0) (3) Data frame sent
I0108 22:31:21.900779       9 log.go:172] (0xc002ad53f0) Data frame received for 1
I0108 22:31:21.900977       9 log.go:172] (0xc0001a3a40) (1) Data frame handling
I0108 22:31:21.901058       9 log.go:172] (0xc0001a3a40) (1) Data frame sent
I0108 22:31:21.903698       9 log.go:172] (0xc002ad53f0) (0xc0016d4000) Stream removed, broadcasting: 5
I0108 22:31:21.904011       9 log.go:172] (0xc002ad53f0) (0xc0008ca0a0) Stream removed, broadcasting: 3
I0108 22:31:21.904137       9 log.go:172] (0xc002ad53f0) (0xc0001a3a40) Stream removed, broadcasting: 1
I0108 22:31:21.904193       9 log.go:172] (0xc002ad53f0) Go away received
I0108 22:31:21.904902       9 log.go:172] (0xc002ad53f0) (0xc0001a3a40) Stream removed, broadcasting: 1
I0108 22:31:21.904943       9 log.go:172] (0xc002ad53f0) (0xc0008ca0a0) Stream removed, broadcasting: 3
I0108 22:31:21.904956       9 log.go:172] (0xc002ad53f0) (0xc0016d4000) Stream removed, broadcasting: 5
Jan  8 22:31:21.904: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:31:21.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2559" for this suite.

• [SLOW TEST:11.317 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":225,"skipped":3773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:31:21.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:31:21.978: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan  8 22:31:23.434: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:31:24.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2931" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":226,"skipped":3796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:31:24.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  8 22:31:26.094: INFO: Pod name wrapped-volume-race-d4a2c654-8abb-4077-8747-31df7f51fdda: Found 0 pods out of 5
Jan  8 22:31:34.398: INFO: Pod name wrapped-volume-race-d4a2c654-8abb-4077-8747-31df7f51fdda: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d4a2c654-8abb-4077-8747-31df7f51fdda in namespace emptydir-wrapper-9329, will wait for the garbage collector to delete the pods
Jan  8 22:32:03.738: INFO: Deleting ReplicationController wrapped-volume-race-d4a2c654-8abb-4077-8747-31df7f51fdda took: 13.841537ms
Jan  8 22:32:04.238: INFO: Terminating ReplicationController wrapped-volume-race-d4a2c654-8abb-4077-8747-31df7f51fdda pods took: 500.449958ms
STEP: Creating RC which spawns configmap-volume pods
Jan  8 22:32:23.507: INFO: Pod name wrapped-volume-race-2a8f3ed9-216e-40c4-b7f9-5923866eaa5e: Found 0 pods out of 5
Jan  8 22:32:28.518: INFO: Pod name wrapped-volume-race-2a8f3ed9-216e-40c4-b7f9-5923866eaa5e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2a8f3ed9-216e-40c4-b7f9-5923866eaa5e in namespace emptydir-wrapper-9329, will wait for the garbage collector to delete the pods
Jan  8 22:32:52.648: INFO: Deleting ReplicationController wrapped-volume-race-2a8f3ed9-216e-40c4-b7f9-5923866eaa5e took: 14.341314ms
Jan  8 22:32:53.049: INFO: Terminating ReplicationController wrapped-volume-race-2a8f3ed9-216e-40c4-b7f9-5923866eaa5e pods took: 400.517157ms
STEP: Creating RC which spawns configmap-volume pods
Jan  8 22:33:04.827: INFO: Pod name wrapped-volume-race-3e635a82-52a8-451c-8e56-ef8b8dd27e0c: Found 0 pods out of 5
Jan  8 22:33:09.921: INFO: Pod name wrapped-volume-race-3e635a82-52a8-451c-8e56-ef8b8dd27e0c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3e635a82-52a8-451c-8e56-ef8b8dd27e0c in namespace emptydir-wrapper-9329, will wait for the garbage collector to delete the pods
Jan  8 22:33:36.027: INFO: Deleting ReplicationController wrapped-volume-race-3e635a82-52a8-451c-8e56-ef8b8dd27e0c took: 8.568903ms
Jan  8 22:33:36.428: INFO: Terminating ReplicationController wrapped-volume-race-3e635a82-52a8-451c-8e56-ef8b8dd27e0c pods took: 401.125262ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:33:54.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9329" for this suite.

• [SLOW TEST:149.473 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":227,"skipped":3826,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:33:54.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-7ca2e3b9-e9ce-469a-9596-d1b717fc7690
STEP: Creating a pod to test consume configMaps
Jan  8 22:33:54.434: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd" in namespace "configmap-4319" to be "success or failure"
Jan  8 22:33:54.498: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd": Phase="Pending", Reason="", readiness=false. Elapsed: 63.951636ms
Jan  8 22:33:56.507: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072937192s
Jan  8 22:33:58.516: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08160839s
Jan  8 22:34:00.537: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102597236s
Jan  8 22:34:02.555: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120104024s
Jan  8 22:34:04.561: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126490074s
Jan  8 22:34:07.080: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.645984744s
STEP: Saw pod success
Jan  8 22:34:07.081: INFO: Pod "pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd" satisfied condition "success or failure"
Jan  8 22:34:07.097: INFO: Trying to get logs from node jerma-node pod pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd container configmap-volume-test: 
STEP: delete the pod
Jan  8 22:34:07.344: INFO: Waiting for pod pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd to disappear
Jan  8 22:34:07.427: INFO: Pod pod-configmaps-fd1cb426-10e2-4348-9633-5bb6357794bd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:34:07.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4319" for this suite.

• [SLOW TEST:13.116 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3835,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:34:07.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  8 22:34:07.577: INFO: Waiting up to 5m0s for pod "pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e" in namespace "emptydir-5691" to be "success or failure"
Jan  8 22:34:07.594: INFO: Pod "pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.105402ms
Jan  8 22:34:09.600: INFO: Pod "pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022477074s
Jan  8 22:34:11.606: INFO: Pod "pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02821014s
Jan  8 22:34:13.614: INFO: Pod "pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0368332s
Jan  8 22:34:15.624: INFO: Pod "pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046730804s
STEP: Saw pod success
Jan  8 22:34:15.624: INFO: Pod "pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e" satisfied condition "success or failure"
Jan  8 22:34:15.629: INFO: Trying to get logs from node jerma-node pod pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e container test-container: 
STEP: delete the pod
Jan  8 22:34:15.781: INFO: Waiting for pod pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e to disappear
Jan  8 22:34:15.787: INFO: Pod pod-3a15dfd7-c379-4ea3-a9f2-30a5ba38bb6e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:34:15.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5691" for this suite.

• [SLOW TEST:8.348 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3874,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:34:15.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 22:34:26.050: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:34:26.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6803" for this suite.

• [SLOW TEST:10.474 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3891,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:34:26.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9661.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9661.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9661.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9661.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9661.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 54.231.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.231.54_udp@PTR;check="$$(dig +tcp +noall +answer +search 54.231.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.231.54_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9661.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9661.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9661.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9661.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9661.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9661.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 54.231.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.231.54_udp@PTR;check="$$(dig +tcp +noall +answer +search 54.231.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.231.54_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 22:34:36.641: INFO: Unable to read wheezy_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.654: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.659: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.665: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.712: INFO: Unable to read jessie_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.741: INFO: Unable to read jessie_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.748: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.754: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:36.796: INFO: Lookups using dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2 failed for: [wheezy_udp@dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_udp@dns-test-service.dns-9661.svc.cluster.local jessie_tcp@dns-test-service.dns-9661.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local]

Jan  8 22:34:41.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.813: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.822: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.834: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.895: INFO: Unable to read jessie_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.906: INFO: Unable to read jessie_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.914: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.921: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:41.964: INFO: Lookups using dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2 failed for: [wheezy_udp@dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_udp@dns-test-service.dns-9661.svc.cluster.local jessie_tcp@dns-test-service.dns-9661.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local]

Jan  8 22:34:46.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.826: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.902: INFO: Unable to read jessie_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.910: INFO: Unable to read jessie_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.915: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.920: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:46.953: INFO: Lookups using dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2 failed for: [wheezy_udp@dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_udp@dns-test-service.dns-9661.svc.cluster.local jessie_tcp@dns-test-service.dns-9661.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local]

Jan  8 22:34:51.812: INFO: Unable to read wheezy_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.826: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.838: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.846: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.913: INFO: Unable to read jessie_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.920: INFO: Unable to read jessie_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.939: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.947: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:51.977: INFO: Lookups using dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2 failed for: [wheezy_udp@dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_udp@dns-test-service.dns-9661.svc.cluster.local jessie_tcp@dns-test-service.dns-9661.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local]

Jan  8 22:34:56.804: INFO: Unable to read wheezy_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.809: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.813: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.817: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.843: INFO: Unable to read jessie_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.853: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:34:56.891: INFO: Lookups using dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2 failed for: [wheezy_udp@dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_udp@dns-test-service.dns-9661.svc.cluster.local jessie_tcp@dns-test-service.dns-9661.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local]

Jan  8 22:35:01.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.825: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.830: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.885: INFO: Unable to read jessie_udp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.890: INFO: Unable to read jessie_tcp@dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.899: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.904: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local from pod dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2: the server could not find the requested resource (get pods dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2)
Jan  8 22:35:01.934: INFO: Lookups using dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2 failed for: [wheezy_udp@dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@dns-test-service.dns-9661.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_udp@dns-test-service.dns-9661.svc.cluster.local jessie_tcp@dns-test-service.dns-9661.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9661.svc.cluster.local]

Jan  8 22:35:06.936: INFO: DNS probes using dns-9661/dns-test-f516f664-ad26-4a7a-bba4-5fe613fc0bc2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:35:07.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9661" for this suite.

• [SLOW TEST:40.967 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":231,"skipped":3892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:35:07.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:35:07.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097" in namespace "downward-api-4025" to be "success or failure"
Jan  8 22:35:07.436: INFO: Pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097": Phase="Pending", Reason="", readiness=false. Elapsed: 13.521637ms
Jan  8 22:35:09.453: INFO: Pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030108406s
Jan  8 22:35:11.464: INFO: Pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04133344s
Jan  8 22:35:13.473: INFO: Pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050739531s
Jan  8 22:35:15.492: INFO: Pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069945914s
Jan  8 22:35:17.512: INFO: Pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089990254s
STEP: Saw pod success
Jan  8 22:35:17.513: INFO: Pod "downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097" satisfied condition "success or failure"
Jan  8 22:35:17.519: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097 container client-container: 
STEP: delete the pod
Jan  8 22:35:17.575: INFO: Waiting for pod downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097 to disappear
Jan  8 22:35:17.582: INFO: Pod downwardapi-volume-26acb7eb-76b4-415a-a84a-2f40a2463097 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:35:17.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4025" for this suite.

• [SLOW TEST:10.357 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3931,"failed":0}
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:35:17.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan  8 22:35:17.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-826'
Jan  8 22:35:18.530: INFO: stderr: ""
Jan  8 22:35:18.530: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 22:35:18.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-826'
Jan  8 22:35:18.886: INFO: stderr: ""
Jan  8 22:35:18.887: INFO: stdout: "update-demo-nautilus-846xf update-demo-nautilus-s9rn8 "
Jan  8 22:35:18.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-846xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-826'
Jan  8 22:35:19.042: INFO: stderr: ""
Jan  8 22:35:19.042: INFO: stdout: ""
Jan  8 22:35:19.042: INFO: update-demo-nautilus-846xf is created but not running
Jan  8 22:35:24.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-826'
Jan  8 22:35:25.244: INFO: stderr: ""
Jan  8 22:35:25.244: INFO: stdout: "update-demo-nautilus-846xf update-demo-nautilus-s9rn8 "
Jan  8 22:35:25.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-846xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-826'
Jan  8 22:35:25.392: INFO: stderr: ""
Jan  8 22:35:25.392: INFO: stdout: ""
Jan  8 22:35:25.392: INFO: update-demo-nautilus-846xf is created but not running
Jan  8 22:35:30.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-826'
Jan  8 22:35:30.629: INFO: stderr: ""
Jan  8 22:35:30.629: INFO: stdout: "update-demo-nautilus-846xf update-demo-nautilus-s9rn8 "
Jan  8 22:35:30.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-846xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-826'
Jan  8 22:35:30.786: INFO: stderr: ""
Jan  8 22:35:30.786: INFO: stdout: "true"
Jan  8 22:35:30.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-846xf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-826'
Jan  8 22:35:30.909: INFO: stderr: ""
Jan  8 22:35:30.909: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:35:30.909: INFO: validating pod update-demo-nautilus-846xf
Jan  8 22:35:30.914: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:35:30.914: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  8 22:35:30.914: INFO: update-demo-nautilus-846xf is verified up and running
Jan  8 22:35:30.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s9rn8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-826'
Jan  8 22:35:31.057: INFO: stderr: ""
Jan  8 22:35:31.057: INFO: stdout: "true"
Jan  8 22:35:31.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s9rn8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-826'
Jan  8 22:35:31.150: INFO: stderr: ""
Jan  8 22:35:31.150: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:35:31.150: INFO: validating pod update-demo-nautilus-s9rn8
Jan  8 22:35:31.158: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:35:31.158: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  8 22:35:31.159: INFO: update-demo-nautilus-s9rn8 is verified up and running
STEP: using delete to clean up resources
Jan  8 22:35:31.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-826'
Jan  8 22:35:31.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:35:31.327: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  8 22:35:31.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-826'
Jan  8 22:35:31.482: INFO: stderr: "No resources found in kubectl-826 namespace.\n"
Jan  8 22:35:31.482: INFO: stdout: ""
Jan  8 22:35:31.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-826 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 22:35:31.618: INFO: stderr: ""
Jan  8 22:35:31.618: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:35:31.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-826" for this suite.

• [SLOW TEST:14.088 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":233,"skipped":3931,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:35:31.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:35:32.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30" in namespace "projected-3250" to be "success or failure"
Jan  8 22:35:33.029: INFO: Pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30": Phase="Pending", Reason="", readiness=false. Elapsed: 329.953915ms
Jan  8 22:35:35.036: INFO: Pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337222303s
Jan  8 22:35:37.087: INFO: Pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387991779s
Jan  8 22:35:39.095: INFO: Pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395729972s
Jan  8 22:35:41.103: INFO: Pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.403898724s
Jan  8 22:35:43.110: INFO: Pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.41117426s
STEP: Saw pod success
Jan  8 22:35:43.110: INFO: Pod "downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30" satisfied condition "success or failure"
Jan  8 22:35:43.114: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30 container client-container: 
STEP: delete the pod
Jan  8 22:35:43.277: INFO: Waiting for pod downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30 to disappear
Jan  8 22:35:43.284: INFO: Pod downwardapi-volume-c2213019-e2d7-4144-b267-b1e50996ec30 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:35:43.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3250" for this suite.

• [SLOW TEST:11.612 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3948,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:35:43.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:35:43.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b" in namespace "downward-api-8763" to be "success or failure"
Jan  8 22:35:43.472: INFO: Pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.754282ms
Jan  8 22:35:45.480: INFO: Pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033491497s
Jan  8 22:35:47.488: INFO: Pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0412782s
Jan  8 22:35:49.497: INFO: Pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050739961s
Jan  8 22:35:51.505: INFO: Pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059011372s
Jan  8 22:35:53.547: INFO: Pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100959419s
STEP: Saw pod success
Jan  8 22:35:53.548: INFO: Pod "downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b" satisfied condition "success or failure"
Jan  8 22:35:53.565: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b container client-container: 
STEP: delete the pod
Jan  8 22:35:53.702: INFO: Waiting for pod downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b to disappear
Jan  8 22:35:53.713: INFO: Pod downwardapi-volume-4520ea01-a998-4296-9d31-bd0d3f7c235b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:35:53.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8763" for this suite.

• [SLOW TEST:10.419 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3952,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:35:53.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  8 22:36:01.907: INFO: &Pod{ObjectMeta:{send-events-08b8ff38-e4a3-406b-a728-f108a900c375  events-3113 /api/v1/namespaces/events-3113/pods/send-events-08b8ff38-e4a3-406b-a728-f108a900c375 a3c188f7-b866-461b-a2d5-f8decf01736d 904290 0 2020-01-08 22:35:53 +0000 UTC   map[name:foo time:836670005] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jb9qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jb9qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jb9qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 22:35:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 22:35:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 22:35:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-08 22:35:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-08 22:35:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-08 22:35:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://0ee24787bab588fd09fdd3355ce4d7c2fdc33267187c6d2de0589c32680ee0fd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan  8 22:36:03.915: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  8 22:36:05.922: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:36:05.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3113" for this suite.

• [SLOW TEST:12.249 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":236,"skipped":3984,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:36:05.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3492.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3492.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3492.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3492.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 22:36:16.178: INFO: DNS probes using dns-3492/dns-test-bbb5af77-7c3e-410b-aa9c-9db487745c67 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:36:16.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3492" for this suite.

• [SLOW TEST:10.384 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":237,"skipped":4010,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:36:16.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:36:16.763: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 22:36:18.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:20.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:22.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:24.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119776, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:36:27.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:36:27.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5743-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:36:29.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7947" for this suite.
STEP: Destroying namespace "webhook-7947-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.946 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":238,"skipped":4028,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:36:29.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:36:30.000: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan  8 22:36:32.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119789, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:34.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119789, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:36.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119789, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:38.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119790, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119789, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:36:41.082: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:36:41.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:36:42.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8969" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.162 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":239,"skipped":4032,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:36:42.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  8 22:36:51.176: INFO: Successfully updated pod "pod-update-5dcee1d2-39ef-4437-935d-5d6bd47d9589"
STEP: verifying the updated pod is in kubernetes
Jan  8 22:36:51.182: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:36:51.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6187" for this suite.

• [SLOW TEST:8.768 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4061,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:36:51.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:36:52.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:54.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119812, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:56.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119812, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:36:58.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119812, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:37:00.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119812, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714119811, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:37:03.104: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:37:03.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-948" for this suite.
STEP: Destroying namespace "webhook-948-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.716 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":241,"skipped":4087,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:37:03.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  8 22:37:04.203: INFO: Waiting up to 5m0s for pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e" in namespace "emptydir-3773" to be "success or failure"
Jan  8 22:37:04.215: INFO: Pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.635865ms
Jan  8 22:37:06.223: INFO: Pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019765098s
Jan  8 22:37:08.230: INFO: Pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026830981s
Jan  8 22:37:10.236: INFO: Pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032764432s
Jan  8 22:37:12.244: INFO: Pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040794141s
Jan  8 22:37:14.250: INFO: Pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04650753s
STEP: Saw pod success
Jan  8 22:37:14.250: INFO: Pod "pod-9b8ed690-2f94-402b-af70-0415404bba9e" satisfied condition "success or failure"
Jan  8 22:37:14.253: INFO: Trying to get logs from node jerma-node pod pod-9b8ed690-2f94-402b-af70-0415404bba9e container test-container: 
STEP: delete the pod
Jan  8 22:37:14.301: INFO: Waiting for pod pod-9b8ed690-2f94-402b-af70-0415404bba9e to disappear
Jan  8 22:37:14.326: INFO: Pod pod-9b8ed690-2f94-402b-af70-0415404bba9e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:37:14.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3773" for this suite.

• [SLOW TEST:10.382 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4094,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:37:14.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan  8 22:37:14.480: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:37:28.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9132" for this suite.

• [SLOW TEST:14.606 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4103,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:37:28.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-g9b4
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 22:37:29.196: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-g9b4" in namespace "subpath-4773" to be "success or failure"
Jan  8 22:37:29.211: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.2344ms
Jan  8 22:37:31.221: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025463031s
Jan  8 22:37:33.229: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032646028s
Jan  8 22:37:35.235: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039434157s
Jan  8 22:37:37.243: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 8.046929827s
Jan  8 22:37:39.251: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 10.05457026s
Jan  8 22:37:41.258: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 12.061827562s
Jan  8 22:37:43.266: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 14.069487254s
Jan  8 22:37:45.273: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 16.077091909s
Jan  8 22:37:47.281: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 18.084673741s
Jan  8 22:37:49.290: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 20.093961273s
Jan  8 22:37:51.301: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 22.104926438s
Jan  8 22:37:53.310: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 24.113688126s
Jan  8 22:37:55.318: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Running", Reason="", readiness=true. Elapsed: 26.121682769s
Jan  8 22:37:57.328: INFO: Pod "pod-subpath-test-downwardapi-g9b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.132169674s
STEP: Saw pod success
Jan  8 22:37:57.328: INFO: Pod "pod-subpath-test-downwardapi-g9b4" satisfied condition "success or failure"
Jan  8 22:37:57.334: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-g9b4 container test-container-subpath-downwardapi-g9b4: 
STEP: delete the pod
Jan  8 22:37:57.394: INFO: Waiting for pod pod-subpath-test-downwardapi-g9b4 to disappear
Jan  8 22:37:57.401: INFO: Pod pod-subpath-test-downwardapi-g9b4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-g9b4
Jan  8 22:37:57.401: INFO: Deleting pod "pod-subpath-test-downwardapi-g9b4" in namespace "subpath-4773"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:37:57.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4773" for this suite.

• [SLOW TEST:28.521 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":244,"skipped":4121,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:37:57.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  8 22:37:57.674: INFO: Waiting up to 5m0s for pod "pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22" in namespace "emptydir-3121" to be "success or failure"
Jan  8 22:37:57.706: INFO: Pod "pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22": Phase="Pending", Reason="", readiness=false. Elapsed: 32.469697ms
Jan  8 22:37:59.713: INFO: Pod "pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038889812s
Jan  8 22:38:01.720: INFO: Pod "pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046576464s
Jan  8 22:38:03.730: INFO: Pod "pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056565646s
Jan  8 22:38:05.737: INFO: Pod "pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062888916s
STEP: Saw pod success
Jan  8 22:38:05.737: INFO: Pod "pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22" satisfied condition "success or failure"
Jan  8 22:38:05.741: INFO: Trying to get logs from node jerma-node pod pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22 container test-container: 
STEP: delete the pod
Jan  8 22:38:05.834: INFO: Waiting for pod pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22 to disappear
Jan  8 22:38:05.863: INFO: Pod pod-59a8e966-4fb3-4a9e-8426-d65876f7ca22 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:38:05.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3121" for this suite.

• [SLOW TEST:8.404 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:38:05.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-5qxr7 in namespace proxy-5988
I0108 22:38:06.103694       9 runners.go:189] Created replication controller with name: proxy-service-5qxr7, namespace: proxy-5988, replica count: 1
I0108 22:38:07.156327       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:38:08.156711       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:38:09.157058       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:38:10.157460       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:38:11.157765       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:38:12.158266       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:38:13.158827       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 22:38:14.159341       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 22:38:15.160188       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 22:38:16.161084       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 22:38:17.161666       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 22:38:18.162180       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 22:38:19.162714       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 22:38:20.163556       9 runners.go:189] proxy-service-5qxr7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  8 22:38:20.168: INFO: setup took 14.21233313s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
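
Each attempt below requests the same echo backends through the apiserver's proxy subresources: services addressed by port name (plain, http-, and https-prefixed) and pods addressed by port number, recording the response body and latency. A manual spot-check of the same URLs, assuming kubectl proxy is listening on 127.0.0.1:8001:

    kubectl proxy --port=8001 &
    curl -s http://127.0.0.1:8001/api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/   # expect "foo"
    curl -s http://127.0.0.1:8001/api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/       # expect "foo"
    curl -s http://127.0.0.1:8001/api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/   # expect "tls baz"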
Jan  8 22:38:20.198: INFO: (0) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 28.870141ms)
Jan  8 22:38:20.198: INFO: (0) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 29.24302ms)
Jan  8 22:38:20.198: INFO: (0) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 29.320451ms)
Jan  8 22:38:20.198: INFO: (0) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 29.586481ms)
Jan  8 22:38:20.201: INFO: (0) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 32.110412ms)
Jan  8 22:38:20.201: INFO: (0) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 31.346391ms)
Jan  8 22:38:20.201: INFO: (0) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 31.793999ms)
Jan  8 22:38:20.201: INFO: (0) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 32.775094ms)
Jan  8 22:38:20.203: INFO: (0) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 33.805632ms)
Jan  8 22:38:20.203: INFO: (0) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 34.731284ms)
Jan  8 22:38:20.203: INFO: (0) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 34.824665ms)
Jan  8 22:38:20.204: INFO: (0) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 35.120912ms)
Jan  8 22:38:20.205: INFO: (0) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 35.67291ms)
Jan  8 22:38:20.205: INFO: (0) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 36.512776ms)
Jan  8 22:38:20.205: INFO: (0) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 36.262982ms)
Jan  8 22:38:20.212: INFO: (0) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test<... (200; 12.575409ms)
Jan  8 22:38:20.225: INFO: (1) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 13.01557ms)
Jan  8 22:38:20.225: INFO: (1) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 12.987671ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 20.168587ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 20.59634ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 20.989259ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 20.655879ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 20.609329ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 21.413244ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 21.078737ms)
Jan  8 22:38:20.233: INFO: (1) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 21.371425ms)
Jan  8 22:38:20.244: INFO: (2) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 10.791488ms)
Jan  8 22:38:20.245: INFO: (2) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 10.640237ms)
Jan  8 22:38:20.245: INFO: (2) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 11.01476ms)
Jan  8 22:38:20.245: INFO: (2) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 11.010257ms)
Jan  8 22:38:20.245: INFO: (2) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 10.963269ms)
Jan  8 22:38:20.245: INFO: (2) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test (200; 9.078427ms)
Jan  8 22:38:20.258: INFO: (3) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 9.153817ms)
Jan  8 22:38:20.259: INFO: (3) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 9.894294ms)
Jan  8 22:38:20.259: INFO: (3) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 10.096906ms)
Jan  8 22:38:20.260: INFO: (3) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 10.422564ms)
Jan  8 22:38:20.260: INFO: (3) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test<... (200; 9.534032ms)
Jan  8 22:38:20.278: INFO: (4) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 10.192035ms)
Jan  8 22:38:20.278: INFO: (4) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 10.367149ms)
Jan  8 22:38:20.278: INFO: (4) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 10.515921ms)
Jan  8 22:38:20.279: INFO: (4) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 11.68768ms)
Jan  8 22:38:20.279: INFO: (4) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 11.106941ms)
Jan  8 22:38:20.279: INFO: (4) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 11.583765ms)
Jan  8 22:38:20.279: INFO: (4) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 12.094794ms)
Jan  8 22:38:20.280: INFO: (4) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 12.398715ms)
Jan  8 22:38:20.281: INFO: (4) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 13.036244ms)
Jan  8 22:38:20.282: INFO: (4) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 14.256998ms)
Jan  8 22:38:20.282: INFO: (4) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 14.553121ms)
Jan  8 22:38:20.291: INFO: (5) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 7.877884ms)
Jan  8 22:38:20.291: INFO: (5) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 7.853674ms)
Jan  8 22:38:20.291: INFO: (5) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 8.436763ms)
Jan  8 22:38:20.291: INFO: (5) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 8.604346ms)
Jan  8 22:38:20.293: INFO: (5) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 10.653439ms)
Jan  8 22:38:20.295: INFO: (5) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 11.832735ms)
Jan  8 22:38:20.295: INFO: (5) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 12.409641ms)
Jan  8 22:38:20.295: INFO: (5) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 12.32322ms)
Jan  8 22:38:20.295: INFO: (5) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 12.863661ms)
Jan  8 22:38:20.296: INFO: (5) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 12.941234ms)
Jan  8 22:38:20.296: INFO: (5) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 13.400442ms)
Jan  8 22:38:20.296: INFO: (5) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 13.419145ms)
Jan  8 22:38:20.296: INFO: (5) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 13.695813ms)
Jan  8 22:38:20.297: INFO: (5) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 14.711123ms)
Jan  8 22:38:20.297: INFO: (5) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 14.754424ms)
Jan  8 22:38:20.298: INFO: (5) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test (200; 10.005808ms)
Jan  8 22:38:20.311: INFO: (6) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test<... (200; 12.462993ms)
Jan  8 22:38:20.312: INFO: (6) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 13.545943ms)
Jan  8 22:38:20.312: INFO: (6) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 14.055178ms)
Jan  8 22:38:20.312: INFO: (6) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 13.597042ms)
Jan  8 22:38:20.312: INFO: (6) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 14.337642ms)
Jan  8 22:38:20.315: INFO: (6) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 17.346937ms)
Jan  8 22:38:20.315: INFO: (6) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 16.310143ms)
Jan  8 22:38:20.315: INFO: (6) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 15.991724ms)
Jan  8 22:38:20.315: INFO: (6) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 16.590768ms)
Jan  8 22:38:20.316: INFO: (6) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 16.318406ms)
Jan  8 22:38:20.316: INFO: (6) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 16.818574ms)
Jan  8 22:38:20.322: INFO: (7) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 5.322359ms)
Jan  8 22:38:20.322: INFO: (7) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 5.539042ms)
Jan  8 22:38:20.322: INFO: (7) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 5.98328ms)
Jan  8 22:38:20.324: INFO: (7) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 6.846999ms)
Jan  8 22:38:20.324: INFO: (7) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 7.095699ms)
Jan  8 22:38:20.325: INFO: (7) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 8.5233ms)
Jan  8 22:38:20.327: INFO: (7) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 9.908449ms)
Jan  8 22:38:20.328: INFO: (7) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 10.851051ms)
Jan  8 22:38:20.328: INFO: (7) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 11.161635ms)
Jan  8 22:38:20.328: INFO: (7) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 11.294517ms)
Jan  8 22:38:20.328: INFO: (7) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 11.377512ms)
Jan  8 22:38:20.328: INFO: (7) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test (200; 7.905011ms)
Jan  8 22:38:20.337: INFO: (8) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 7.905204ms)
Jan  8 22:38:20.340: INFO: (8) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 10.793797ms)
Jan  8 22:38:20.340: INFO: (8) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 10.861954ms)
Jan  8 22:38:20.340: INFO: (8) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 10.939585ms)
Jan  8 22:38:20.340: INFO: (8) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 11.050362ms)
Jan  8 22:38:20.341: INFO: (8) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 11.085943ms)
Jan  8 22:38:20.341: INFO: (8) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 11.529804ms)
Jan  8 22:38:20.341: INFO: (8) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 11.75778ms)
Jan  8 22:38:20.341: INFO: (8) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 11.80062ms)
Jan  8 22:38:20.342: INFO: (8) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 12.459099ms)
Jan  8 22:38:20.346: INFO: (9) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 4.120105ms)
Jan  8 22:38:20.358: INFO: (9) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 15.477677ms)
Jan  8 22:38:20.358: INFO: (9) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 15.652433ms)
Jan  8 22:38:20.358: INFO: (9) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 15.884285ms)
Jan  8 22:38:20.359: INFO: (9) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 16.362357ms)
Jan  8 22:38:20.361: INFO: (9) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: ... (200; 25.52503ms)
Jan  8 22:38:20.370: INFO: (9) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 26.323503ms)
Jan  8 22:38:20.370: INFO: (9) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 26.707918ms)
Jan  8 22:38:20.370: INFO: (9) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 26.709046ms)
Jan  8 22:38:20.370: INFO: (9) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 27.475918ms)
Jan  8 22:38:20.372: INFO: (9) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 29.1155ms)
Jan  8 22:38:20.380: INFO: (10) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 7.319141ms)
Jan  8 22:38:20.380: INFO: (10) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 7.528554ms)
Jan  8 22:38:20.384: INFO: (10) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 10.898687ms)
Jan  8 22:38:20.384: INFO: (10) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 11.094372ms)
Jan  8 22:38:20.384: INFO: (10) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 11.628021ms)
Jan  8 22:38:20.384: INFO: (10) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 11.608283ms)
Jan  8 22:38:20.385: INFO: (10) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 12.115116ms)
Jan  8 22:38:20.385: INFO: (10) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 12.569519ms)
Jan  8 22:38:20.386: INFO: (10) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: ... (200; 13.787592ms)
Jan  8 22:38:20.388: INFO: (10) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 15.247178ms)
Jan  8 22:38:20.388: INFO: (10) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 15.320941ms)
Jan  8 22:38:20.389: INFO: (10) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 16.836911ms)
Jan  8 22:38:20.389: INFO: (10) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 16.450308ms)
Jan  8 22:38:20.389: INFO: (10) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 16.361322ms)
Jan  8 22:38:20.389: INFO: (10) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 17.040676ms)
Jan  8 22:38:20.404: INFO: (11) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 14.487966ms)
Jan  8 22:38:20.404: INFO: (11) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 14.636996ms)
Jan  8 22:38:20.404: INFO: (11) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 14.785912ms)
Jan  8 22:38:20.404: INFO: (11) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 14.502732ms)
Jan  8 22:38:20.404: INFO: (11) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 14.63052ms)
Jan  8 22:38:20.404: INFO: (11) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 14.619776ms)
Jan  8 22:38:20.405: INFO: (11) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 15.990393ms)
Jan  8 22:38:20.406: INFO: (11) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 15.866546ms)
Jan  8 22:38:20.406: INFO: (11) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 15.882279ms)
Jan  8 22:38:20.406: INFO: (11) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test (200; 16.879195ms)
Jan  8 22:38:20.407: INFO: (11) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 16.812652ms)
Jan  8 22:38:20.413: INFO: (12) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 6.685912ms)
Jan  8 22:38:20.413: INFO: (12) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 6.655165ms)
Jan  8 22:38:20.415: INFO: (12) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 8.429108ms)
Jan  8 22:38:20.415: INFO: (12) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test<... (200; 9.681967ms)
Jan  8 22:38:20.416: INFO: (12) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 9.537628ms)
Jan  8 22:38:20.417: INFO: (12) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 9.757118ms)
Jan  8 22:38:20.417: INFO: (12) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 9.983982ms)
Jan  8 22:38:20.417: INFO: (12) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 9.934739ms)
Jan  8 22:38:20.417: INFO: (12) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 10.061558ms)
Jan  8 22:38:20.417: INFO: (12) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 10.153696ms)
Jan  8 22:38:20.417: INFO: (12) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 10.107702ms)
Jan  8 22:38:20.417: INFO: (12) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 10.285834ms)
Jan  8 22:38:20.424: INFO: (13) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: ... (200; 10.097829ms)
Jan  8 22:38:20.428: INFO: (13) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 9.901214ms)
Jan  8 22:38:20.428: INFO: (13) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 9.802005ms)
Jan  8 22:38:20.428: INFO: (13) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 9.816988ms)
Jan  8 22:38:20.428: INFO: (13) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 10.461032ms)
Jan  8 22:38:20.428: INFO: (13) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 9.979198ms)
Jan  8 22:38:20.429: INFO: (13) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 10.922568ms)
Jan  8 22:38:20.429: INFO: (13) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 10.861885ms)
Jan  8 22:38:20.429: INFO: (13) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 10.951023ms)
Jan  8 22:38:20.429: INFO: (13) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 11.245011ms)
Jan  8 22:38:20.429: INFO: (13) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 11.616788ms)
Jan  8 22:38:20.430: INFO: (13) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 12.067211ms)
Jan  8 22:38:20.432: INFO: (13) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 13.48758ms)
Jan  8 22:38:20.441: INFO: (14) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 9.426282ms)
Jan  8 22:38:20.441: INFO: (14) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 9.470764ms)
Jan  8 22:38:20.448: INFO: (14) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 16.223595ms)
Jan  8 22:38:20.448: INFO: (14) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 16.129356ms)
Jan  8 22:38:20.448: INFO: (14) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 16.119842ms)
Jan  8 22:38:20.448: INFO: (14) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test (200; 20.936415ms)
Jan  8 22:38:20.453: INFO: (14) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 20.932205ms)
Jan  8 22:38:20.455: INFO: (14) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 22.549317ms)
Jan  8 22:38:20.459: INFO: (15) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 4.33778ms)
Jan  8 22:38:20.463: INFO: (15) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 7.969535ms)
Jan  8 22:38:20.463: INFO: (15) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test (200; 16.480812ms)
Jan  8 22:38:20.472: INFO: (15) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 16.481584ms)
Jan  8 22:38:20.472: INFO: (15) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 16.672888ms)
Jan  8 22:38:20.472: INFO: (15) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 17.280221ms)
Jan  8 22:38:20.484: INFO: (16) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 11.600269ms)
Jan  8 22:38:20.484: INFO: (16) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 11.641175ms)
Jan  8 22:38:20.484: INFO: (16) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 11.572095ms)
Jan  8 22:38:20.484: INFO: (16) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 11.797198ms)
Jan  8 22:38:20.490: INFO: (16) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: ... (200; 21.443047ms)
Jan  8 22:38:20.496: INFO: (16) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 24.051316ms)
Jan  8 22:38:20.499: INFO: (16) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 26.810684ms)
Jan  8 22:38:20.500: INFO: (16) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 27.664082ms)
Jan  8 22:38:20.501: INFO: (16) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 28.382786ms)
Jan  8 22:38:20.502: INFO: (16) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 29.576685ms)
Jan  8 22:38:20.503: INFO: (16) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 30.947932ms)
Jan  8 22:38:20.505: INFO: (16) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 32.728172ms)
Jan  8 22:38:20.505: INFO: (16) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 32.647979ms)
Jan  8 22:38:20.509: INFO: (16) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 36.484269ms)
Jan  8 22:38:20.527: INFO: (17) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 18.459298ms)
Jan  8 22:38:20.528: INFO: (17) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 18.964483ms)
Jan  8 22:38:20.528: INFO: (17) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 18.92956ms)
Jan  8 22:38:20.529: INFO: (17) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 19.314473ms)
Jan  8 22:38:20.534: INFO: (17) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 24.699173ms)
Jan  8 22:38:20.534: INFO: (17) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 24.600288ms)
Jan  8 22:38:20.534: INFO: (17) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: ... (200; 25.625942ms)
Jan  8 22:38:20.535: INFO: (17) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 25.945444ms)
Jan  8 22:38:20.536: INFO: (17) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 26.451689ms)
Jan  8 22:38:20.536: INFO: (17) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 26.372702ms)
Jan  8 22:38:20.536: INFO: (17) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 26.974347ms)
Jan  8 22:38:20.565: INFO: (17) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 55.512405ms)
Jan  8 22:38:20.565: INFO: (17) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 55.503288ms)
Jan  8 22:38:20.589: INFO: (18) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:1080/proxy/: test<... (200; 23.422506ms)
Jan  8 22:38:20.589: INFO: (18) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 23.027891ms)
Jan  8 22:38:20.589: INFO: (18) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 23.745936ms)
Jan  8 22:38:20.596: INFO: (18) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:460/proxy/: tls baz (200; 30.374782ms)
Jan  8 22:38:20.596: INFO: (18) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 30.879894ms)
Jan  8 22:38:20.596: INFO: (18) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 30.444991ms)
Jan  8 22:38:20.596: INFO: (18) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m/proxy/: test (200; 30.420713ms)
Jan  8 22:38:20.596: INFO: (18) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:160/proxy/: foo (200; 30.389899ms)
Jan  8 22:38:20.596: INFO: (18) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 30.543287ms)
Jan  8 22:38:20.597: INFO: (18) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname1/proxy/: foo (200; 30.614898ms)
Jan  8 22:38:20.597: INFO: (18) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 30.667245ms)
Jan  8 22:38:20.597: INFO: (18) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test (200; 37.946954ms)
Jan  8 22:38:20.638: INFO: (19) /api/v1/namespaces/proxy-5988/pods/proxy-service-5qxr7-jdg7m:162/proxy/: bar (200; 39.589587ms)
Jan  8 22:38:20.639: INFO: (19) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname2/proxy/: tls qux (200; 40.397118ms)
Jan  8 22:38:20.640: INFO: (19) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:462/proxy/: tls qux (200; 40.888341ms)
Jan  8 22:38:20.640: INFO: (19) /api/v1/namespaces/proxy-5988/pods/http:proxy-service-5qxr7-jdg7m:1080/proxy/: ... (200; 40.994698ms)
Jan  8 22:38:20.642: INFO: (19) /api/v1/namespaces/proxy-5988/pods/https:proxy-service-5qxr7-jdg7m:443/proxy/: test<... (200; 43.52492ms)
Jan  8 22:38:20.643: INFO: (19) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname1/proxy/: foo (200; 44.986667ms)
Jan  8 22:38:20.654: INFO: (19) /api/v1/namespaces/proxy-5988/services/proxy-service-5qxr7:portname2/proxy/: bar (200; 55.781894ms)
Jan  8 22:38:20.655: INFO: (19) /api/v1/namespaces/proxy-5988/services/http:proxy-service-5qxr7:portname2/proxy/: bar (200; 55.678147ms)
Jan  8 22:38:20.655: INFO: (19) /api/v1/namespaces/proxy-5988/services/https:proxy-service-5qxr7:tlsportname1/proxy/: tls baz (200; 56.341691ms)
STEP: deleting ReplicationController proxy-service-5qxr7 in namespace proxy-5988, will wait for the garbage collector to delete the pods
Jan  8 22:38:20.734: INFO: Deleting ReplicationController proxy-service-5qxr7 took: 14.491059ms
Jan  8 22:38:21.035: INFO: Terminating ReplicationController proxy-service-5qxr7 pods took: 300.937172ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:38:32.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5988" for this suite.

• [SLOW TEST:26.583 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":246,"skipped":4167,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:38:32.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-574 to expose endpoints map[]
Jan  8 22:38:32.594: INFO: Get endpoints failed (12.34294ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  8 22:38:33.635: INFO: successfully validated that service endpoint-test2 in namespace services-574 exposes endpoints map[] (1.053479298s elapsed)
STEP: Creating pod pod1 in namespace services-574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-574 to expose endpoints map[pod1:[80]]
Jan  8 22:38:37.745: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.094511381s elapsed, will retry)
Jan  8 22:38:41.812: INFO: successfully validated that service endpoint-test2 in namespace services-574 exposes endpoints map[pod1:[80]] (8.161653282s elapsed)
STEP: Creating pod pod2 in namespace services-574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-574 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  8 22:38:46.589: INFO: Unexpected endpoints: found map[0c7d072c-a494-46a8-952f-f39603b966cd:[80]], expected map[pod1:[80] pod2:[80]] (4.771878553s elapsed, will retry)
Jan  8 22:38:48.640: INFO: successfully validated that service endpoint-test2 in namespace services-574 exposes endpoints map[pod1:[80] pod2:[80]] (6.822731811s elapsed)
STEP: Deleting pod pod1 in namespace services-574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-574 to expose endpoints map[pod2:[80]]
Jan  8 22:38:49.731: INFO: successfully validated that service endpoint-test2 in namespace services-574 exposes endpoints map[pod2:[80]] (1.083444134s elapsed)
STEP: Deleting pod pod2 in namespace services-574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-574 to expose endpoints map[]
Jan  8 22:38:49.793: INFO: successfully validated that service endpoint-test2 in namespace services-574 exposes endpoints map[] (43.531344ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:38:49.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-574" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.488 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":247,"skipped":4173,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:38:49.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jan  8 22:38:50.053: INFO: Waiting up to 5m0s for pod "client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168" in namespace "containers-1942" to be "success or failure"
Jan  8 22:38:50.064: INFO: Pod "client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168": Phase="Pending", Reason="", readiness=false. Elapsed: 11.250074ms
Jan  8 22:38:52.069: INFO: Pod "client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016528584s
Jan  8 22:38:54.076: INFO: Pod "client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022993223s
Jan  8 22:38:56.087: INFO: Pod "client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034101934s
Jan  8 22:38:58.135: INFO: Pod "client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08247058s
STEP: Saw pod success
Jan  8 22:38:58.135: INFO: Pod "client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168" satisfied condition "success or failure"
Jan  8 22:38:58.141: INFO: Trying to get logs from node jerma-node pod client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168 container test-container: 
STEP: delete the pod
Jan  8 22:38:58.232: INFO: Waiting for pod client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168 to disappear
Jan  8 22:38:58.252: INFO: Pod client-containers-1c10a653-7bf6-4f83-85ad-6ad2ac816168 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:38:58.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1942" for this suite.

• [SLOW TEST:8.308 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4183,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:38:58.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-c94c2875-3b35-4557-b5b5-d71fd71efbdd
STEP: Creating a pod to test consume configMaps
Jan  8 22:38:58.429: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1" in namespace "projected-8883" to be "success or failure"
Jan  8 22:38:58.462: INFO: Pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.663576ms
Jan  8 22:39:00.475: INFO: Pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04629867s
Jan  8 22:39:02.484: INFO: Pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055517711s
Jan  8 22:39:04.496: INFO: Pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067238057s
Jan  8 22:39:06.513: INFO: Pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084641851s
Jan  8 22:39:08.526: INFO: Pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097735778s
STEP: Saw pod success
Jan  8 22:39:08.527: INFO: Pod "pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1" satisfied condition "success or failure"
Jan  8 22:39:08.533: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 22:39:08.652: INFO: Waiting for pod pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1 to disappear
Jan  8 22:39:08.671: INFO: Pod pod-projected-configmaps-d47b8c5a-fefc-468a-b00e-71c6e715d4b1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:39:08.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8883" for this suite.

• [SLOW TEST:10.434 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4183,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:39:08.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan  8 22:39:08.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8600'
Jan  8 22:39:10.933: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  8 22:39:10.933: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan  8 22:39:10.963: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-4dns2]
Jan  8 22:39:10.964: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-4dns2" in namespace "kubectl-8600" to be "running and ready"
Jan  8 22:39:10.968: INFO: Pod "e2e-test-httpd-rc-4dns2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.647655ms
Jan  8 22:39:12.977: INFO: Pod "e2e-test-httpd-rc-4dns2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013190832s
Jan  8 22:39:14.990: INFO: Pod "e2e-test-httpd-rc-4dns2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025849618s
Jan  8 22:39:16.996: INFO: Pod "e2e-test-httpd-rc-4dns2": Phase="Running", Reason="", readiness=true. Elapsed: 6.032338836s
Jan  8 22:39:16.996: INFO: Pod "e2e-test-httpd-rc-4dns2" satisfied condition "running and ready"
Jan  8 22:39:16.996: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-4dns2]
Jan  8 22:39:16.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8600'
Jan  8 22:39:17.177: INFO: stderr: ""
Jan  8 22:39:17.178: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Wed Jan 08 22:39:16.631516 2020] [mpm_event:notice] [pid 1:tid 140612421864296] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Jan 08 22:39:16.631626 2020] [core:notice] [pid 1:tid 140612421864296] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan  8 22:39:17.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8600'
Jan  8 22:39:17.357: INFO: stderr: ""
Jan  8 22:39:17.357: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:39:17.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8600" for this suite.

• [SLOW TEST:8.691 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":250,"skipped":4206,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:39:17.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3118
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan  8 22:39:17.530: INFO: Found 0 stateful pods, waiting for 3
Jan  8 22:39:27.539: INFO: Found 2 stateful pods, waiting for 3
Jan  8 22:39:37.540: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:39:37.540: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:39:37.540: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  8 22:39:47.539: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:39:47.539: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:39:47.539: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan  8 22:39:47.584: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  8 22:39:57.679: INFO: Updating stateful set ss2
Jan  8 22:39:57.770: INFO: Waiting for Pod statefulset-3118/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan  8 22:40:08.149: INFO: Found 2 stateful pods, waiting for 3
Jan  8 22:40:18.166: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:40:18.166: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:40:18.167: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  8 22:40:28.164: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:40:28.164: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 22:40:28.164: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  8 22:40:28.217: INFO: Updating stateful set ss2
Jan  8 22:40:28.299: INFO: Waiting for Pod statefulset-3118/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan  8 22:40:38.759: INFO: Updating stateful set ss2
Jan  8 22:40:38.797: INFO: Waiting for StatefulSet statefulset-3118/ss2 to complete update
Jan  8 22:40:38.797: INFO: Waiting for Pod statefulset-3118/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan  8 22:40:48.811: INFO: Waiting for StatefulSet statefulset-3118/ss2 to complete update
Jan  8 22:40:48.811: INFO: Waiting for Pod statefulset-3118/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan  8 22:40:58.811: INFO: Waiting for StatefulSet statefulset-3118/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan  8 22:41:08.815: INFO: Deleting all statefulset in ns statefulset-3118
Jan  8 22:41:08.819: INFO: Scaling statefulset ss2 to 0
Jan  8 22:41:48.858: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 22:41:48.865: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:41:48.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3118" for this suite.

• [SLOW TEST:151.568 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":251,"skipped":4225,"failed":0}
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:41:48.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:41:49.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2028" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":252,"skipped":4227,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:41:49.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan  8 22:41:49.401: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682" in namespace "projected-5646" to be "success or failure"
Jan  8 22:41:49.445: INFO: Pod "downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682": Phase="Pending", Reason="", readiness=false. Elapsed: 44.319316ms
Jan  8 22:41:51.460: INFO: Pod "downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059092088s
Jan  8 22:41:53.471: INFO: Pod "downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069653746s
Jan  8 22:41:55.481: INFO: Pod "downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079459728s
Jan  8 22:41:57.490: INFO: Pod "downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088795379s
STEP: Saw pod success
Jan  8 22:41:57.490: INFO: Pod "downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682" satisfied condition "success or failure"
Jan  8 22:41:57.494: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682 container client-container: 
STEP: delete the pod
Jan  8 22:41:57.557: INFO: Waiting for pod downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682 to disappear
Jan  8 22:41:57.570: INFO: Pod downwardapi-volume-af62f3d1-5b96-464f-a1e1-620a54048682 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:41:57.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5646" for this suite.

• [SLOW TEST:8.299 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:41:57.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 22:42:05.841: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:42:05.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9986" for this suite.

• [SLOW TEST:8.310 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4279,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:42:05.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan  8 22:42:06.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1733'
Jan  8 22:42:06.200: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  8 22:42:06.200: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jan  8 22:42:06.268: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  8 22:42:06.290: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  8 22:42:06.319: INFO: scanned /root for discovery docs: 
Jan  8 22:42:06.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1733'
Jan  8 22:42:28.607: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  8 22:42:28.607: INFO: stdout: "Created e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5\nScaling up e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan  8 22:42:28.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1733'
Jan  8 22:42:28.732: INFO: stderr: ""
Jan  8 22:42:28.732: INFO: stdout: "e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5-52mgc e2e-test-httpd-rc-9tqw5 "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Jan  8 22:42:33.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1733'
Jan  8 22:42:33.915: INFO: stderr: ""
Jan  8 22:42:33.915: INFO: stdout: "e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5-52mgc "
Jan  8 22:42:33.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5-52mgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1733'
Jan  8 22:42:34.121: INFO: stderr: ""
Jan  8 22:42:34.121: INFO: stdout: "true"
Jan  8 22:42:34.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5-52mgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1733'
Jan  8 22:42:34.243: INFO: stderr: ""
Jan  8 22:42:34.243: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan  8 22:42:34.243: INFO: e2e-test-httpd-rc-2e701bd154e64a15fa17ba19e13093a5-52mgc is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Jan  8 22:42:34.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1733'
Jan  8 22:42:34.473: INFO: stderr: ""
Jan  8 22:42:34.473: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:42:34.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1733" for this suite.

• [SLOW TEST:28.588 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":255,"skipped":4307,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:42:34.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-2fc308d8-9910-4098-948b-9d5321e88657
STEP: Creating secret with name s-test-opt-upd-b75a5d16-3bf1-4268-ba35-790f3eeeaa65
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2fc308d8-9910-4098-948b-9d5321e88657
STEP: Updating secret s-test-opt-upd-b75a5d16-3bf1-4268-ba35-790f3eeeaa65
STEP: Creating secret with name s-test-opt-create-53f0754e-28e8-46fb-96bb-e1dd8d10355c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:43:55.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2278" for this suite.

• [SLOW TEST:81.157 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4309,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:43:55.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan  8 22:44:05.605: INFO: Successfully updated pod "annotationupdated1945892-6e91-4e69-827b-c893e35f4bc1"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:44:09.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9846" for this suite.

• [SLOW TEST:14.111 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4320,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:44:09.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9605a1ea-e8b0-4ff8-b965-1f1374a5e9d7
STEP: Creating configMap with name cm-test-opt-upd-c6f3a982-b58a-43a8-8363-d5806961f379
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9605a1ea-e8b0-4ff8-b965-1f1374a5e9d7
STEP: Updating configmap cm-test-opt-upd-c6f3a982-b58a-43a8-8363-d5806961f379
STEP: Creating configMap with name cm-test-opt-create-3359b9d6-c67f-4563-bd8f-f727a4659dfe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:44:26.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4028" for this suite.

• [SLOW TEST:17.062 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4351,"failed":0}
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:44:26.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-a55b8718-3072-4cfb-a7c6-4eb1de651855
STEP: Creating a pod to test consume secrets
Jan  8 22:44:26.954: INFO: Waiting up to 5m0s for pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62" in namespace "secrets-9836" to be "success or failure"
Jan  8 22:44:26.966: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62": Phase="Pending", Reason="", readiness=false. Elapsed: 12.361861ms
Jan  8 22:44:28.977: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023803685s
Jan  8 22:44:30.983: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02894268s
Jan  8 22:44:32.990: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036714513s
Jan  8 22:44:34.998: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04424778s
Jan  8 22:44:37.004: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050375996s
Jan  8 22:44:39.010: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.055989577s
STEP: Saw pod success
Jan  8 22:44:39.010: INFO: Pod "pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62" satisfied condition "success or failure"
Jan  8 22:44:39.013: INFO: Trying to get logs from node jerma-node pod pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62 container secret-env-test: 
STEP: delete the pod
Jan  8 22:44:39.154: INFO: Waiting for pod pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62 to disappear
Jan  8 22:44:39.160: INFO: Pod pod-secrets-10ebf7cb-370e-4e04-b49f-a3c9f8ad2f62 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:44:39.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9836" for this suite.

• [SLOW TEST:12.362 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4351,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:44:39.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jan  8 22:44:39.280: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan  8 22:44:39.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6510'
Jan  8 22:44:39.876: INFO: stderr: ""
Jan  8 22:44:39.877: INFO: stdout: "service/agnhost-slave created\n"
Jan  8 22:44:39.878: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan  8 22:44:39.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6510'
Jan  8 22:44:40.318: INFO: stderr: ""
Jan  8 22:44:40.318: INFO: stdout: "service/agnhost-master created\n"
Jan  8 22:44:40.320: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  8 22:44:40.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6510'
Jan  8 22:44:40.925: INFO: stderr: ""
Jan  8 22:44:40.925: INFO: stdout: "service/frontend created\n"
Jan  8 22:44:40.926: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan  8 22:44:40.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6510'
Jan  8 22:44:41.584: INFO: stderr: ""
Jan  8 22:44:41.584: INFO: stdout: "deployment.apps/frontend created\n"
Jan  8 22:44:41.585: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  8 22:44:41.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6510'
Jan  8 22:44:42.141: INFO: stderr: ""
Jan  8 22:44:42.142: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan  8 22:44:42.143: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  8 22:44:42.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6510'
Jan  8 22:44:43.461: INFO: stderr: ""
Jan  8 22:44:43.461: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan  8 22:44:43.461: INFO: Waiting for all frontend pods to be Running.
Jan  8 22:45:03.514: INFO: Waiting for frontend to serve content.
Jan  8 22:45:03.533: INFO: Trying to add a new entry to the guestbook.
Jan  8 22:45:03.554: INFO: Verifying that added entry can be retrieved.
Jan  8 22:45:03.578: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jan  8 22:45:08.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6510'
Jan  8 22:45:08.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:45:08.874: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 22:45:08.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6510'
Jan  8 22:45:09.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:45:09.118: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 22:45:09.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6510'
Jan  8 22:45:09.325: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:45:09.325: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 22:45:09.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6510'
Jan  8 22:45:09.469: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:45:09.469: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 22:45:09.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6510'
Jan  8 22:45:09.594: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:45:09.594: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 22:45:09.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6510'
Jan  8 22:45:09.794: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 22:45:09.794: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:45:09.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6510" for this suite.

• [SLOW TEST:30.705 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":260,"skipped":4363,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:45:09.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan  8 22:45:10.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6892'
Jan  8 22:45:11.562: INFO: stderr: ""
Jan  8 22:45:11.562: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan  8 22:45:12.582: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:12.583: INFO: Found 0 / 1
Jan  8 22:45:13.571: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:13.571: INFO: Found 0 / 1
Jan  8 22:45:14.966: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:14.966: INFO: Found 0 / 1
Jan  8 22:45:15.613: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:15.613: INFO: Found 0 / 1
Jan  8 22:45:16.588: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:16.588: INFO: Found 0 / 1
Jan  8 22:45:17.576: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:17.576: INFO: Found 0 / 1
Jan  8 22:45:18.575: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:18.576: INFO: Found 0 / 1
Jan  8 22:45:19.572: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:19.572: INFO: Found 0 / 1
Jan  8 22:45:20.573: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:20.573: INFO: Found 0 / 1
Jan  8 22:45:21.570: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:21.570: INFO: Found 0 / 1
Jan  8 22:45:22.579: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:22.579: INFO: Found 0 / 1
Jan  8 22:45:23.579: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:23.579: INFO: Found 0 / 1
Jan  8 22:45:24.572: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:24.573: INFO: Found 1 / 1
Jan  8 22:45:24.573: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  8 22:45:24.579: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:24.579: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  8 22:45:24.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-j5z7z --namespace=kubectl-6892 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  8 22:45:24.861: INFO: stderr: ""
Jan  8 22:45:24.861: INFO: stdout: "pod/agnhost-master-j5z7z patched\n"
STEP: checking annotations
Jan  8 22:45:24.867: INFO: Selector matched 1 pods for map[app:agnhost]
Jan  8 22:45:24.867: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:45:24.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6892" for this suite.

• [SLOW TEST:14.971 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":261,"skipped":4367,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:45:24.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  8 22:45:24.973: INFO: Waiting up to 5m0s for pod "pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a" in namespace "emptydir-3659" to be "success or failure"
Jan  8 22:45:24.978: INFO: Pod "pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.457109ms
Jan  8 22:45:27.000: INFO: Pod "pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026579221s
Jan  8 22:45:29.007: INFO: Pod "pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032994841s
Jan  8 22:45:31.013: INFO: Pod "pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039707222s
Jan  8 22:45:33.021: INFO: Pod "pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047334484s
STEP: Saw pod success
Jan  8 22:45:33.021: INFO: Pod "pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a" satisfied condition "success or failure"
Jan  8 22:45:33.025: INFO: Trying to get logs from node jerma-node pod pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a container test-container: 
STEP: delete the pod
Jan  8 22:45:33.101: INFO: Waiting for pod pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a to disappear
Jan  8 22:45:33.121: INFO: Pod pod-c77ee193-3bf6-4e0e-96d8-d1026acdca0a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:45:33.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3659" for this suite.

• [SLOW TEST:8.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4383,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:45:33.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:45:33.894: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 22:45:35.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:45:37.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:45:39.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120333, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:45:42.970: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:45:44.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7597" for this suite.
STEP: Destroying namespace "webhook-7597-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.376 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":263,"skipped":4383,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:45:44.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-84739d3b-bc0b-4a9c-b73d-ce345c9fb180 in namespace container-probe-1225
Jan  8 22:45:54.984: INFO: Started pod liveness-84739d3b-bc0b-4a9c-b73d-ce345c9fb180 in namespace container-probe-1225
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 22:45:54.988: INFO: Initial restart count of pod liveness-84739d3b-bc0b-4a9c-b73d-ce345c9fb180 is 0
Jan  8 22:46:17.081: INFO: Restart count of pod container-probe-1225/liveness-84739d3b-bc0b-4a9c-b73d-ce345c9fb180 is now 1 (22.093466017s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:46:17.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1225" for this suite.

• [SLOW TEST:32.653 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4385,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:46:17.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan  8 22:46:17.284: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  8 22:46:17.299: INFO: Waiting for terminating namespaces to be deleted...
Jan  8 22:46:17.302: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan  8 22:46:17.312: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan  8 22:46:17.312: INFO: 	Container weave ready: true, restart count 1
Jan  8 22:46:17.312: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 22:46:17.312: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.312: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 22:46:17.312: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan  8 22:46:17.338: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.338: INFO: 	Container kube-scheduler ready: true, restart count 2
Jan  8 22:46:17.338: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.338: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan  8 22:46:17.338: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.338: INFO: 	Container etcd ready: true, restart count 1
Jan  8 22:46:17.338: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.338: INFO: 	Container coredns ready: true, restart count 0
Jan  8 22:46:17.338: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.338: INFO: 	Container coredns ready: true, restart count 0
Jan  8 22:46:17.338: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.338: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jan  8 22:46:17.338: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan  8 22:46:17.338: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 22:46:17.338: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan  8 22:46:17.338: INFO: 	Container weave ready: true, restart count 0
Jan  8 22:46:17.338: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-595753e2-630d-456c-843a-458cebec1def 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-595753e2-630d-456c-843a-458cebec1def off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-595753e2-630d-456c-843a-458cebec1def
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:46:50.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1053" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:33.766 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":265,"skipped":4385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:46:50.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:46:51.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3291
I0108 22:46:51.060355       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3291, replica count: 1
I0108 22:46:52.111659       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:46:53.112116       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:46:54.112571       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:46:55.113141       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:46:56.114021       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 22:46:57.114662       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  8 22:46:57.233: INFO: Created: latency-svc-kkbgs
Jan  8 22:46:57.244: INFO: Got endpoints: latency-svc-kkbgs [29.766418ms]
Jan  8 22:46:57.281: INFO: Created: latency-svc-4dtnc
Jan  8 22:46:57.334: INFO: Got endpoints: latency-svc-4dtnc [88.457106ms]
Jan  8 22:46:57.335: INFO: Created: latency-svc-zgf2z
Jan  8 22:46:57.370: INFO: Got endpoints: latency-svc-zgf2z [124.732241ms]
Jan  8 22:46:57.371: INFO: Created: latency-svc-m2586
Jan  8 22:46:57.377: INFO: Got endpoints: latency-svc-m2586 [131.150253ms]
Jan  8 22:46:57.478: INFO: Created: latency-svc-h7n58
Jan  8 22:46:57.502: INFO: Got endpoints: latency-svc-h7n58 [255.763248ms]
Jan  8 22:46:57.509: INFO: Created: latency-svc-8qqkj
Jan  8 22:46:57.509: INFO: Got endpoints: latency-svc-8qqkj [262.746029ms]
Jan  8 22:46:57.653: INFO: Created: latency-svc-fwcdb
Jan  8 22:46:57.665: INFO: Got endpoints: latency-svc-fwcdb [419.064604ms]
Jan  8 22:46:57.688: INFO: Created: latency-svc-p5cgj
Jan  8 22:46:57.699: INFO: Got endpoints: latency-svc-p5cgj [453.232359ms]
Jan  8 22:46:57.742: INFO: Created: latency-svc-8v5v9
Jan  8 22:46:57.747: INFO: Got endpoints: latency-svc-8v5v9 [501.23425ms]
Jan  8 22:46:57.813: INFO: Created: latency-svc-ktc8c
Jan  8 22:46:57.818: INFO: Got endpoints: latency-svc-ktc8c [572.08917ms]
Jan  8 22:46:57.991: INFO: Created: latency-svc-mcq9w
Jan  8 22:46:58.024: INFO: Got endpoints: latency-svc-mcq9w [778.748544ms]
Jan  8 22:46:58.025: INFO: Created: latency-svc-ntfnt
Jan  8 22:46:58.049: INFO: Got endpoints: latency-svc-ntfnt [802.432993ms]
Jan  8 22:46:58.051: INFO: Created: latency-svc-mh4rg
Jan  8 22:46:58.061: INFO: Got endpoints: latency-svc-mh4rg [814.811656ms]
Jan  8 22:46:58.141: INFO: Created: latency-svc-mdxhq
Jan  8 22:46:58.168: INFO: Created: latency-svc-rq7ch
Jan  8 22:46:58.171: INFO: Got endpoints: latency-svc-mdxhq [925.274838ms]
Jan  8 22:46:58.185: INFO: Got endpoints: latency-svc-rq7ch [939.028589ms]
Jan  8 22:46:58.214: INFO: Created: latency-svc-ndcdd
Jan  8 22:46:58.217: INFO: Got endpoints: latency-svc-ndcdd [971.32113ms]
Jan  8 22:46:58.372: INFO: Created: latency-svc-sv95x
Jan  8 22:46:58.413: INFO: Got endpoints: latency-svc-sv95x [1.079540169s]
Jan  8 22:46:58.416: INFO: Created: latency-svc-96tpd
Jan  8 22:46:58.439: INFO: Got endpoints: latency-svc-96tpd [1.069841886s]
Jan  8 22:46:58.568: INFO: Created: latency-svc-77bzj
Jan  8 22:46:58.573: INFO: Got endpoints: latency-svc-77bzj [1.195736955s]
Jan  8 22:46:58.651: INFO: Created: latency-svc-grcj7
Jan  8 22:46:58.737: INFO: Got endpoints: latency-svc-grcj7 [1.234837894s]
Jan  8 22:46:58.738: INFO: Created: latency-svc-69qs4
Jan  8 22:46:58.753: INFO: Got endpoints: latency-svc-69qs4 [1.243917348s]
Jan  8 22:46:58.773: INFO: Created: latency-svc-5c5xq
Jan  8 22:46:58.776: INFO: Got endpoints: latency-svc-5c5xq [1.111553602s]
Jan  8 22:46:58.799: INFO: Created: latency-svc-mmwts
Jan  8 22:46:58.809: INFO: Got endpoints: latency-svc-mmwts [1.109655653s]
Jan  8 22:46:58.915: INFO: Created: latency-svc-zxncn
Jan  8 22:46:58.978: INFO: Got endpoints: latency-svc-zxncn [1.23051296s]
Jan  8 22:46:59.087: INFO: Created: latency-svc-mtx4d
Jan  8 22:46:59.126: INFO: Got endpoints: latency-svc-mtx4d [1.307483094s]
Jan  8 22:46:59.130: INFO: Created: latency-svc-52rwf
Jan  8 22:46:59.168: INFO: Got endpoints: latency-svc-52rwf [1.143429477s]
Jan  8 22:46:59.250: INFO: Created: latency-svc-2fjt6
Jan  8 22:46:59.260: INFO: Got endpoints: latency-svc-2fjt6 [1.210620314s]
Jan  8 22:46:59.300: INFO: Created: latency-svc-2msjj
Jan  8 22:46:59.302: INFO: Got endpoints: latency-svc-2msjj [1.240530085s]
Jan  8 22:46:59.423: INFO: Created: latency-svc-7fmg2
Jan  8 22:46:59.428: INFO: Got endpoints: latency-svc-7fmg2 [1.257414984s]
Jan  8 22:46:59.492: INFO: Created: latency-svc-9gkml
Jan  8 22:46:59.503: INFO: Got endpoints: latency-svc-9gkml [1.318355169s]
Jan  8 22:46:59.577: INFO: Created: latency-svc-kcck5
Jan  8 22:46:59.587: INFO: Got endpoints: latency-svc-kcck5 [1.369507831s]
Jan  8 22:46:59.665: INFO: Created: latency-svc-cssqp
Jan  8 22:46:59.788: INFO: Got endpoints: latency-svc-cssqp [1.374833171s]
Jan  8 22:46:59.877: INFO: Created: latency-svc-z8zn6
Jan  8 22:46:59.880: INFO: Got endpoints: latency-svc-z8zn6 [1.439966982s]
Jan  8 22:46:59.967: INFO: Created: latency-svc-lg7f8
Jan  8 22:46:59.980: INFO: Got endpoints: latency-svc-lg7f8 [1.407317204s]
Jan  8 22:47:00.115: INFO: Created: latency-svc-vdkb2
Jan  8 22:47:00.136: INFO: Got endpoints: latency-svc-vdkb2 [1.398901298s]
Jan  8 22:47:00.169: INFO: Created: latency-svc-t87cc
Jan  8 22:47:00.176: INFO: Got endpoints: latency-svc-t87cc [1.423226578s]
Jan  8 22:47:00.205: INFO: Created: latency-svc-4pjfl
Jan  8 22:47:00.292: INFO: Got endpoints: latency-svc-4pjfl [1.515823295s]
Jan  8 22:47:00.337: INFO: Created: latency-svc-b4d5c
Jan  8 22:47:00.348: INFO: Got endpoints: latency-svc-b4d5c [1.538590868s]
Jan  8 22:47:00.469: INFO: Created: latency-svc-jtcxk
Jan  8 22:47:00.507: INFO: Got endpoints: latency-svc-jtcxk [1.529289877s]
Jan  8 22:47:00.555: INFO: Created: latency-svc-rjlpk
Jan  8 22:47:00.692: INFO: Got endpoints: latency-svc-rjlpk [1.565659405s]
Jan  8 22:47:00.697: INFO: Created: latency-svc-kfz9d
Jan  8 22:47:00.708: INFO: Got endpoints: latency-svc-kfz9d [199.897583ms]
Jan  8 22:47:00.730: INFO: Created: latency-svc-2dm8z
Jan  8 22:47:00.756: INFO: Got endpoints: latency-svc-2dm8z [1.587990275s]
Jan  8 22:47:00.842: INFO: Created: latency-svc-d4zct
Jan  8 22:47:00.857: INFO: Got endpoints: latency-svc-d4zct [1.597208678s]
Jan  8 22:47:00.881: INFO: Created: latency-svc-stjsj
Jan  8 22:47:00.890: INFO: Got endpoints: latency-svc-stjsj [1.588110009s]
Jan  8 22:47:00.925: INFO: Created: latency-svc-dt8kr
Jan  8 22:47:00.928: INFO: Got endpoints: latency-svc-dt8kr [1.499829942s]
Jan  8 22:47:00.996: INFO: Created: latency-svc-fg7jx
Jan  8 22:47:01.011: INFO: Got endpoints: latency-svc-fg7jx [1.507287132s]
Jan  8 22:47:01.063: INFO: Created: latency-svc-jt7s6
Jan  8 22:47:01.178: INFO: Got endpoints: latency-svc-jt7s6 [1.591154943s]
Jan  8 22:47:01.185: INFO: Created: latency-svc-npfgj
Jan  8 22:47:01.198: INFO: Got endpoints: latency-svc-npfgj [1.40984331s]
Jan  8 22:47:01.245: INFO: Created: latency-svc-d7tff
Jan  8 22:47:01.249: INFO: Got endpoints: latency-svc-d7tff [1.369323894s]
Jan  8 22:47:01.454: INFO: Created: latency-svc-bb22p
Jan  8 22:47:01.454: INFO: Created: latency-svc-8tznr
Jan  8 22:47:01.466: INFO: Got endpoints: latency-svc-8tznr [1.485934272s]
Jan  8 22:47:01.468: INFO: Got endpoints: latency-svc-bb22p [1.332032087s]
Jan  8 22:47:01.571: INFO: Created: latency-svc-wkqkh
Jan  8 22:47:01.590: INFO: Got endpoints: latency-svc-wkqkh [1.413652221s]
Jan  8 22:47:01.596: INFO: Created: latency-svc-nq4j4
Jan  8 22:47:01.604: INFO: Got endpoints: latency-svc-nq4j4 [1.311064527s]
Jan  8 22:47:01.627: INFO: Created: latency-svc-j9csl
Jan  8 22:47:01.635: INFO: Got endpoints: latency-svc-j9csl [1.287153555s]
Jan  8 22:47:01.663: INFO: Created: latency-svc-ctd82
Jan  8 22:47:01.787: INFO: Got endpoints: latency-svc-ctd82 [1.094975992s]
Jan  8 22:47:01.846: INFO: Created: latency-svc-n84xv
Jan  8 22:47:01.861: INFO: Got endpoints: latency-svc-n84xv [1.153369054s]
Jan  8 22:47:01.935: INFO: Created: latency-svc-8rvlm
Jan  8 22:47:01.940: INFO: Got endpoints: latency-svc-8rvlm [1.183456923s]
Jan  8 22:47:01.975: INFO: Created: latency-svc-rvnhn
Jan  8 22:47:01.978: INFO: Got endpoints: latency-svc-rvnhn [1.120907573s]
Jan  8 22:47:02.081: INFO: Created: latency-svc-wb4w4
Jan  8 22:47:02.110: INFO: Got endpoints: latency-svc-wb4w4 [1.219806926s]
Jan  8 22:47:02.112: INFO: Created: latency-svc-xm9n8
Jan  8 22:47:02.151: INFO: Created: latency-svc-jhhfg
Jan  8 22:47:02.151: INFO: Got endpoints: latency-svc-xm9n8 [1.223105071s]
Jan  8 22:47:02.158: INFO: Got endpoints: latency-svc-jhhfg [1.147331333s]
Jan  8 22:47:02.263: INFO: Created: latency-svc-xndzv
Jan  8 22:47:02.281: INFO: Got endpoints: latency-svc-xndzv [1.103060374s]
Jan  8 22:47:02.324: INFO: Created: latency-svc-wvqcm
Jan  8 22:47:02.332: INFO: Got endpoints: latency-svc-wvqcm [1.133137893s]
Jan  8 22:47:02.453: INFO: Created: latency-svc-ntj8b
Jan  8 22:47:02.493: INFO: Got endpoints: latency-svc-ntj8b [1.244211882s]
Jan  8 22:47:02.495: INFO: Created: latency-svc-fm4fv
Jan  8 22:47:02.508: INFO: Got endpoints: latency-svc-fm4fv [1.041666332s]
Jan  8 22:47:02.656: INFO: Created: latency-svc-tmcsl
Jan  8 22:47:02.682: INFO: Got endpoints: latency-svc-tmcsl [1.21459547s]
Jan  8 22:47:02.687: INFO: Created: latency-svc-vj5kj
Jan  8 22:47:02.706: INFO: Got endpoints: latency-svc-vj5kj [1.116211662s]
Jan  8 22:47:02.755: INFO: Created: latency-svc-mcm9q
Jan  8 22:47:02.803: INFO: Got endpoints: latency-svc-mcm9q [1.1991628s]
Jan  8 22:47:02.808: INFO: Created: latency-svc-m56x6
Jan  8 22:47:02.813: INFO: Got endpoints: latency-svc-m56x6 [1.178579934s]
Jan  8 22:47:02.858: INFO: Created: latency-svc-5z4m6
Jan  8 22:47:02.858: INFO: Got endpoints: latency-svc-5z4m6 [1.070462047s]
Jan  8 22:47:02.940: INFO: Created: latency-svc-7ckwr
Jan  8 22:47:02.941: INFO: Got endpoints: latency-svc-7ckwr [1.079499233s]
Jan  8 22:47:02.969: INFO: Created: latency-svc-fpxws
Jan  8 22:47:02.976: INFO: Got endpoints: latency-svc-fpxws [1.036307083s]
Jan  8 22:47:03.004: INFO: Created: latency-svc-wcxc6
Jan  8 22:47:03.013: INFO: Got endpoints: latency-svc-wcxc6 [1.034593493s]
Jan  8 22:47:03.087: INFO: Created: latency-svc-s8bbh
Jan  8 22:47:03.089: INFO: Got endpoints: latency-svc-s8bbh [978.266927ms]
Jan  8 22:47:03.128: INFO: Created: latency-svc-rjnp9
Jan  8 22:47:03.131: INFO: Got endpoints: latency-svc-rjnp9 [979.915987ms]
Jan  8 22:47:03.269: INFO: Created: latency-svc-swmnz
Jan  8 22:47:03.278: INFO: Got endpoints: latency-svc-swmnz [1.119335588s]
Jan  8 22:47:03.328: INFO: Created: latency-svc-xnjsj
Jan  8 22:47:03.352: INFO: Got endpoints: latency-svc-xnjsj [1.070469063s]
Jan  8 22:47:03.356: INFO: Created: latency-svc-zppw4
Jan  8 22:47:03.410: INFO: Got endpoints: latency-svc-zppw4 [1.078192368s]
Jan  8 22:47:03.486: INFO: Created: latency-svc-bjdld
Jan  8 22:47:03.490: INFO: Got endpoints: latency-svc-bjdld [996.244346ms]
Jan  8 22:47:03.541: INFO: Created: latency-svc-w8dqc
Jan  8 22:47:03.554: INFO: Got endpoints: latency-svc-w8dqc [1.04543622s]
Jan  8 22:47:03.571: INFO: Created: latency-svc-vf2b9
Jan  8 22:47:03.581: INFO: Got endpoints: latency-svc-vf2b9 [898.544932ms]
Jan  8 22:47:03.608: INFO: Created: latency-svc-m8k59
Jan  8 22:47:03.640: INFO: Got endpoints: latency-svc-m8k59 [933.228598ms]
Jan  8 22:47:03.872: INFO: Created: latency-svc-nvpfg
Jan  8 22:47:03.963: INFO: Created: latency-svc-mtw5t
Jan  8 22:47:03.970: INFO: Got endpoints: latency-svc-nvpfg [1.167037182s]
Jan  8 22:47:04.000: INFO: Got endpoints: latency-svc-mtw5t [1.186177666s]
Jan  8 22:47:04.000: INFO: Created: latency-svc-9t8mc
Jan  8 22:47:04.036: INFO: Created: latency-svc-6cnlk
Jan  8 22:47:04.040: INFO: Got endpoints: latency-svc-9t8mc [1.181463739s]
Jan  8 22:47:04.044: INFO: Got endpoints: latency-svc-6cnlk [1.102756096s]
Jan  8 22:47:04.140: INFO: Created: latency-svc-2nk7k
Jan  8 22:47:04.152: INFO: Got endpoints: latency-svc-2nk7k [1.175994796s]
Jan  8 22:47:04.172: INFO: Created: latency-svc-dp8n4
Jan  8 22:47:04.174: INFO: Got endpoints: latency-svc-dp8n4 [1.160286928s]
Jan  8 22:47:04.274: INFO: Created: latency-svc-q2mfr
Jan  8 22:47:04.276: INFO: Got endpoints: latency-svc-q2mfr [1.187808364s]
Jan  8 22:47:04.319: INFO: Created: latency-svc-bfnnh
Jan  8 22:47:04.343: INFO: Got endpoints: latency-svc-bfnnh [1.210975989s]
Jan  8 22:47:04.476: INFO: Created: latency-svc-94lsf
Jan  8 22:47:04.484: INFO: Got endpoints: latency-svc-94lsf [1.206281872s]
Jan  8 22:47:04.517: INFO: Created: latency-svc-qk6bc
Jan  8 22:47:04.522: INFO: Got endpoints: latency-svc-qk6bc [1.169875987s]
Jan  8 22:47:04.554: INFO: Created: latency-svc-cfnc2
Jan  8 22:47:04.560: INFO: Got endpoints: latency-svc-cfnc2 [1.150147865s]
Jan  8 22:47:04.608: INFO: Created: latency-svc-pq2fb
Jan  8 22:47:04.634: INFO: Created: latency-svc-bm2cg
Jan  8 22:47:04.636: INFO: Got endpoints: latency-svc-pq2fb [1.145403307s]
Jan  8 22:47:04.641: INFO: Got endpoints: latency-svc-bm2cg [1.086684866s]
Jan  8 22:47:04.664: INFO: Created: latency-svc-4hslm
Jan  8 22:47:04.671: INFO: Got endpoints: latency-svc-4hslm [1.090157453s]
Jan  8 22:47:04.704: INFO: Created: latency-svc-58sns
Jan  8 22:47:04.731: INFO: Got endpoints: latency-svc-58sns [1.091390536s]
Jan  8 22:47:04.750: INFO: Created: latency-svc-t9prz
Jan  8 22:47:04.758: INFO: Got endpoints: latency-svc-t9prz [788.259155ms]
Jan  8 22:47:04.805: INFO: Created: latency-svc-7tspd
Jan  8 22:47:04.816: INFO: Got endpoints: latency-svc-7tspd [815.982198ms]
Jan  8 22:47:04.908: INFO: Created: latency-svc-vpx9r
Jan  8 22:47:04.908: INFO: Got endpoints: latency-svc-vpx9r [868.319161ms]
Jan  8 22:47:04.933: INFO: Created: latency-svc-tpvj5
Jan  8 22:47:04.947: INFO: Got endpoints: latency-svc-tpvj5 [903.054661ms]
Jan  8 22:47:04.978: INFO: Created: latency-svc-9l8f8
Jan  8 22:47:04.978: INFO: Got endpoints: latency-svc-9l8f8 [825.874866ms]
Jan  8 22:47:05.021: INFO: Created: latency-svc-rrhxs
Jan  8 22:47:05.023: INFO: Got endpoints: latency-svc-rrhxs [849.519506ms]
Jan  8 22:47:05.073: INFO: Created: latency-svc-qgg6n
Jan  8 22:47:05.087: INFO: Got endpoints: latency-svc-qgg6n [810.263858ms]
Jan  8 22:47:05.105: INFO: Created: latency-svc-54pmn
Jan  8 22:47:05.242: INFO: Got endpoints: latency-svc-54pmn [899.097361ms]
Jan  8 22:47:05.247: INFO: Created: latency-svc-4gr2q
Jan  8 22:47:05.258: INFO: Got endpoints: latency-svc-4gr2q [773.656355ms]
Jan  8 22:47:05.302: INFO: Created: latency-svc-ckhrk
Jan  8 22:47:05.308: INFO: Got endpoints: latency-svc-ckhrk [786.029548ms]
Jan  8 22:47:05.332: INFO: Created: latency-svc-4qd9w
Jan  8 22:47:05.385: INFO: Got endpoints: latency-svc-4qd9w [824.578501ms]
Jan  8 22:47:05.397: INFO: Created: latency-svc-2vgcs
Jan  8 22:47:05.403: INFO: Got endpoints: latency-svc-2vgcs [766.805331ms]
Jan  8 22:47:05.429: INFO: Created: latency-svc-hkw6k
Jan  8 22:47:05.444: INFO: Got endpoints: latency-svc-hkw6k [802.895521ms]
Jan  8 22:47:05.463: INFO: Created: latency-svc-9cf7f
Jan  8 22:47:05.466: INFO: Got endpoints: latency-svc-9cf7f [794.857924ms]
Jan  8 22:47:05.517: INFO: Created: latency-svc-n4vhp
Jan  8 22:47:05.529: INFO: Got endpoints: latency-svc-n4vhp [798.038412ms]
Jan  8 22:47:05.548: INFO: Created: latency-svc-mvqmv
Jan  8 22:47:05.564: INFO: Got endpoints: latency-svc-mvqmv [805.190281ms]
Jan  8 22:47:05.566: INFO: Created: latency-svc-zwznx
Jan  8 22:47:05.591: INFO: Got endpoints: latency-svc-zwznx [774.691014ms]
Jan  8 22:47:05.610: INFO: Created: latency-svc-mrvhs
Jan  8 22:47:05.611: INFO: Got endpoints: latency-svc-mrvhs [703.203281ms]
Jan  8 22:47:05.661: INFO: Created: latency-svc-7h5pq
Jan  8 22:47:05.673: INFO: Got endpoints: latency-svc-7h5pq [725.692662ms]
Jan  8 22:47:05.703: INFO: Created: latency-svc-x4642
Jan  8 22:47:05.721: INFO: Got endpoints: latency-svc-x4642 [742.251449ms]
Jan  8 22:47:05.754: INFO: Created: latency-svc-d9dd2
Jan  8 22:47:05.803: INFO: Got endpoints: latency-svc-d9dd2 [779.559918ms]
Jan  8 22:47:05.830: INFO: Created: latency-svc-9zk4s
Jan  8 22:47:05.838: INFO: Got endpoints: latency-svc-9zk4s [750.703149ms]
Jan  8 22:47:05.860: INFO: Created: latency-svc-k7bg6
Jan  8 22:47:05.989: INFO: Got endpoints: latency-svc-k7bg6 [747.107872ms]
Jan  8 22:47:06.036: INFO: Created: latency-svc-sfkp5
Jan  8 22:47:06.079: INFO: Got endpoints: latency-svc-sfkp5 [821.503996ms]
Jan  8 22:47:06.192: INFO: Created: latency-svc-wt8th
Jan  8 22:47:06.192: INFO: Got endpoints: latency-svc-wt8th [883.770057ms]
Jan  8 22:47:06.235: INFO: Created: latency-svc-bwlbj
Jan  8 22:47:06.237: INFO: Got endpoints: latency-svc-bwlbj [851.498353ms]
Jan  8 22:47:06.285: INFO: Created: latency-svc-rkfkh
Jan  8 22:47:06.316: INFO: Got endpoints: latency-svc-rkfkh [913.124367ms]
Jan  8 22:47:06.358: INFO: Created: latency-svc-545sq
Jan  8 22:47:06.370: INFO: Got endpoints: latency-svc-545sq [926.286631ms]
Jan  8 22:47:06.512: INFO: Created: latency-svc-8vm4d
Jan  8 22:47:06.515: INFO: Got endpoints: latency-svc-8vm4d [1.048926895s]
Jan  8 22:47:06.533: INFO: Created: latency-svc-sfkms
Jan  8 22:47:06.560: INFO: Got endpoints: latency-svc-sfkms [1.030175249s]
Jan  8 22:47:06.583: INFO: Created: latency-svc-dzj75
Jan  8 22:47:06.596: INFO: Got endpoints: latency-svc-dzj75 [1.031741705s]
Jan  8 22:47:06.627: INFO: Created: latency-svc-gr46v
Jan  8 22:47:06.636: INFO: Got endpoints: latency-svc-gr46v [1.045053131s]
Jan  8 22:47:06.672: INFO: Created: latency-svc-4rqbc
Jan  8 22:47:06.686: INFO: Got endpoints: latency-svc-4rqbc [1.073961236s]
Jan  8 22:47:06.712: INFO: Created: latency-svc-vx4fs
Jan  8 22:47:06.717: INFO: Got endpoints: latency-svc-vx4fs [1.044049466s]
Jan  8 22:47:06.768: INFO: Created: latency-svc-ctpcb
Jan  8 22:47:06.807: INFO: Got endpoints: latency-svc-ctpcb [1.085914087s]
Jan  8 22:47:06.807: INFO: Created: latency-svc-vs6p6
Jan  8 22:47:06.830: INFO: Got endpoints: latency-svc-vs6p6 [1.026313266s]
Jan  8 22:47:06.856: INFO: Created: latency-svc-m5g5k
Jan  8 22:47:06.863: INFO: Got endpoints: latency-svc-m5g5k [1.02527125s]
Jan  8 22:47:06.900: INFO: Created: latency-svc-9zf57
Jan  8 22:47:06.907: INFO: Got endpoints: latency-svc-9zf57 [917.999682ms]
Jan  8 22:47:06.925: INFO: Created: latency-svc-62twv
Jan  8 22:47:06.945: INFO: Got endpoints: latency-svc-62twv [865.01043ms]
Jan  8 22:47:06.963: INFO: Created: latency-svc-7vbmp
Jan  8 22:47:06.966: INFO: Got endpoints: latency-svc-7vbmp [773.761743ms]
Jan  8 22:47:06.988: INFO: Created: latency-svc-5tw6v
Jan  8 22:47:07.037: INFO: Got endpoints: latency-svc-5tw6v [799.844878ms]
Jan  8 22:47:07.040: INFO: Created: latency-svc-h97pb
Jan  8 22:47:07.081: INFO: Got endpoints: latency-svc-h97pb [764.302452ms]
Jan  8 22:47:07.083: INFO: Created: latency-svc-zlxj2
Jan  8 22:47:07.100: INFO: Got endpoints: latency-svc-zlxj2 [730.103297ms]
Jan  8 22:47:07.234: INFO: Created: latency-svc-pwljp
Jan  8 22:47:07.239: INFO: Got endpoints: latency-svc-pwljp [722.995541ms]
Jan  8 22:47:07.289: INFO: Created: latency-svc-jr9xm
Jan  8 22:47:07.295: INFO: Got endpoints: latency-svc-jr9xm [734.661666ms]
Jan  8 22:47:07.326: INFO: Created: latency-svc-6cq49
Jan  8 22:47:07.362: INFO: Got endpoints: latency-svc-6cq49 [766.67745ms]
Jan  8 22:47:07.384: INFO: Created: latency-svc-65hq8
Jan  8 22:47:07.417: INFO: Got endpoints: latency-svc-65hq8 [780.568979ms]
Jan  8 22:47:07.419: INFO: Created: latency-svc-c8pcp
Jan  8 22:47:07.426: INFO: Got endpoints: latency-svc-c8pcp [740.017493ms]
Jan  8 22:47:07.487: INFO: Created: latency-svc-j97hb
Jan  8 22:47:07.494: INFO: Got endpoints: latency-svc-j97hb [776.9793ms]
Jan  8 22:47:07.523: INFO: Created: latency-svc-vjggg
Jan  8 22:47:07.526: INFO: Got endpoints: latency-svc-vjggg [718.966518ms]
Jan  8 22:47:07.541: INFO: Created: latency-svc-nwvnb
Jan  8 22:47:07.544: INFO: Got endpoints: latency-svc-nwvnb [713.994848ms]
Jan  8 22:47:07.568: INFO: Created: latency-svc-q57kb
Jan  8 22:47:07.578: INFO: Got endpoints: latency-svc-q57kb [715.294579ms]
Jan  8 22:47:07.582: INFO: Created: latency-svc-swcpd
Jan  8 22:47:07.630: INFO: Got endpoints: latency-svc-swcpd [722.279393ms]
Jan  8 22:47:07.649: INFO: Created: latency-svc-f5f5b
Jan  8 22:47:07.657: INFO: Got endpoints: latency-svc-f5f5b [712.013008ms]
Jan  8 22:47:07.685: INFO: Created: latency-svc-vg676
Jan  8 22:47:07.690: INFO: Got endpoints: latency-svc-vg676 [723.789584ms]
Jan  8 22:47:07.717: INFO: Created: latency-svc-9cmc5
Jan  8 22:47:07.717: INFO: Got endpoints: latency-svc-9cmc5 [679.878081ms]
Jan  8 22:47:07.795: INFO: Created: latency-svc-tjj5g
Jan  8 22:47:07.799: INFO: Got endpoints: latency-svc-tjj5g [717.947743ms]
Jan  8 22:47:07.824: INFO: Created: latency-svc-r4fxr
Jan  8 22:47:07.834: INFO: Got endpoints: latency-svc-r4fxr [733.687197ms]
Jan  8 22:47:07.885: INFO: Created: latency-svc-87xb7
Jan  8 22:47:07.922: INFO: Got endpoints: latency-svc-87xb7 [683.41611ms]
Jan  8 22:47:07.949: INFO: Created: latency-svc-xnmvb
Jan  8 22:47:07.969: INFO: Got endpoints: latency-svc-xnmvb [674.381629ms]
Jan  8 22:47:07.981: INFO: Created: latency-svc-qw6n7
Jan  8 22:47:08.055: INFO: Got endpoints: latency-svc-qw6n7 [692.28997ms]
Jan  8 22:47:08.057: INFO: Created: latency-svc-zt7rd
Jan  8 22:47:08.065: INFO: Got endpoints: latency-svc-zt7rd [648.754855ms]
Jan  8 22:47:08.092: INFO: Created: latency-svc-ngp7f
Jan  8 22:47:08.109: INFO: Got endpoints: latency-svc-ngp7f [683.54363ms]
Jan  8 22:47:08.113: INFO: Created: latency-svc-sdnt9
Jan  8 22:47:08.124: INFO: Got endpoints: latency-svc-sdnt9 [630.087597ms]
Jan  8 22:47:08.145: INFO: Created: latency-svc-6jqcr
Jan  8 22:47:08.193: INFO: Got endpoints: latency-svc-6jqcr [666.465036ms]
Jan  8 22:47:08.198: INFO: Created: latency-svc-pnknb
Jan  8 22:47:08.221: INFO: Got endpoints: latency-svc-pnknb [677.645066ms]
Jan  8 22:47:08.227: INFO: Created: latency-svc-kzkv7
Jan  8 22:47:08.240: INFO: Got endpoints: latency-svc-kzkv7 [661.552056ms]
Jan  8 22:47:08.266: INFO: Created: latency-svc-69bzz
Jan  8 22:47:08.271: INFO: Got endpoints: latency-svc-69bzz [640.694001ms]
Jan  8 22:47:08.430: INFO: Created: latency-svc-4cdww
Jan  8 22:47:08.442: INFO: Got endpoints: latency-svc-4cdww [785.44414ms]
Jan  8 22:47:08.475: INFO: Created: latency-svc-6dhnl
Jan  8 22:47:08.491: INFO: Got endpoints: latency-svc-6dhnl [801.563754ms]
Jan  8 22:47:08.511: INFO: Created: latency-svc-6hp26
Jan  8 22:47:08.516: INFO: Got endpoints: latency-svc-6hp26 [799.363152ms]
Jan  8 22:47:08.580: INFO: Created: latency-svc-72226
Jan  8 22:47:08.580: INFO: Got endpoints: latency-svc-72226 [781.084945ms]
Jan  8 22:47:08.609: INFO: Created: latency-svc-knj8r
Jan  8 22:47:08.617: INFO: Got endpoints: latency-svc-knj8r [782.504398ms]
Jan  8 22:47:08.651: INFO: Created: latency-svc-s7tlz
Jan  8 22:47:08.658: INFO: Got endpoints: latency-svc-s7tlz [735.442154ms]
Jan  8 22:47:08.723: INFO: Created: latency-svc-6dnqp
Jan  8 22:47:08.746: INFO: Got endpoints: latency-svc-6dnqp [776.316148ms]
Jan  8 22:47:08.780: INFO: Created: latency-svc-ncmbm
Jan  8 22:47:08.803: INFO: Got endpoints: latency-svc-ncmbm [747.835541ms]
Jan  8 22:47:08.915: INFO: Created: latency-svc-z9q8x
Jan  8 22:47:08.928: INFO: Got endpoints: latency-svc-z9q8x [862.677899ms]
Jan  8 22:47:08.989: INFO: Created: latency-svc-g2rkq
Jan  8 22:47:09.123: INFO: Got endpoints: latency-svc-g2rkq [1.013943424s]
Jan  8 22:47:09.131: INFO: Created: latency-svc-flcjq
Jan  8 22:47:09.144: INFO: Got endpoints: latency-svc-flcjq [1.019149288s]
Jan  8 22:47:09.219: INFO: Created: latency-svc-8krc5
Jan  8 22:47:09.412: INFO: Got endpoints: latency-svc-8krc5 [1.218438355s]
Jan  8 22:47:09.462: INFO: Created: latency-svc-nh8qw
Jan  8 22:47:09.477: INFO: Got endpoints: latency-svc-nh8qw [1.255312453s]
Jan  8 22:47:09.589: INFO: Created: latency-svc-xpb4h
Jan  8 22:47:09.601: INFO: Got endpoints: latency-svc-xpb4h [1.361121046s]
Jan  8 22:47:09.668: INFO: Created: latency-svc-gm2dj
Jan  8 22:47:09.675: INFO: Got endpoints: latency-svc-gm2dj [1.403606259s]
Jan  8 22:47:09.720: INFO: Created: latency-svc-k8tdb
Jan  8 22:47:09.744: INFO: Got endpoints: latency-svc-k8tdb [1.301772829s]
Jan  8 22:47:09.773: INFO: Created: latency-svc-stgml
Jan  8 22:47:09.777: INFO: Got endpoints: latency-svc-stgml [1.285470113s]
Jan  8 22:47:09.810: INFO: Created: latency-svc-q8tpf
Jan  8 22:47:09.813: INFO: Got endpoints: latency-svc-q8tpf [1.297042834s]
Jan  8 22:47:09.861: INFO: Created: latency-svc-kf5nt
Jan  8 22:47:09.865: INFO: Got endpoints: latency-svc-kf5nt [1.284906409s]
Jan  8 22:47:09.890: INFO: Created: latency-svc-k69b9
Jan  8 22:47:09.894: INFO: Got endpoints: latency-svc-k69b9 [1.277417522s]
Jan  8 22:47:09.913: INFO: Created: latency-svc-fwbcl
Jan  8 22:47:09.918: INFO: Got endpoints: latency-svc-fwbcl [1.260259069s]
Jan  8 22:47:09.936: INFO: Created: latency-svc-668sm
Jan  8 22:47:09.943: INFO: Got endpoints: latency-svc-668sm [1.196590171s]
Jan  8 22:47:09.991: INFO: Created: latency-svc-4hnv4
Jan  8 22:47:10.006: INFO: Got endpoints: latency-svc-4hnv4 [1.203474891s]
Jan  8 22:47:10.010: INFO: Created: latency-svc-wskbv
Jan  8 22:47:10.015: INFO: Got endpoints: latency-svc-wskbv [1.086455823s]
Jan  8 22:47:10.037: INFO: Created: latency-svc-nhgls
Jan  8 22:47:10.044: INFO: Got endpoints: latency-svc-nhgls [919.980789ms]
Jan  8 22:47:10.062: INFO: Created: latency-svc-tk225
Jan  8 22:47:10.069: INFO: Got endpoints: latency-svc-tk225 [924.930785ms]
Jan  8 22:47:10.125: INFO: Created: latency-svc-248g9
Jan  8 22:47:10.127: INFO: Got endpoints: latency-svc-248g9 [715.650582ms]
Jan  8 22:47:10.166: INFO: Created: latency-svc-hb64m
Jan  8 22:47:10.180: INFO: Got endpoints: latency-svc-hb64m [703.270995ms]
Jan  8 22:47:10.183: INFO: Created: latency-svc-7nlch
Jan  8 22:47:10.201: INFO: Got endpoints: latency-svc-7nlch [599.465374ms]
Jan  8 22:47:10.220: INFO: Created: latency-svc-hc5kc
Jan  8 22:47:10.273: INFO: Got endpoints: latency-svc-hc5kc [598.410497ms]
Jan  8 22:47:10.310: INFO: Created: latency-svc-q72q9
Jan  8 22:47:10.322: INFO: Got endpoints: latency-svc-q72q9 [577.764449ms]
Jan  8 22:47:10.343: INFO: Created: latency-svc-5hq2j
Jan  8 22:47:10.355: INFO: Got endpoints: latency-svc-5hq2j [577.759762ms]
Jan  8 22:47:10.445: INFO: Created: latency-svc-sjhzn
Jan  8 22:47:10.489: INFO: Got endpoints: latency-svc-sjhzn [675.297619ms]
Jan  8 22:47:10.492: INFO: Created: latency-svc-kll4s
Jan  8 22:47:10.499: INFO: Got endpoints: latency-svc-kll4s [633.520555ms]
Jan  8 22:47:10.599: INFO: Created: latency-svc-bz5lt
Jan  8 22:47:10.601: INFO: Got endpoints: latency-svc-bz5lt [706.776749ms]
Jan  8 22:47:10.641: INFO: Created: latency-svc-xk54w
Jan  8 22:47:10.643: INFO: Got endpoints: latency-svc-xk54w [724.367344ms]
Jan  8 22:47:10.643: INFO: Latencies: [88.457106ms 124.732241ms 131.150253ms 199.897583ms 255.763248ms 262.746029ms 419.064604ms 453.232359ms 501.23425ms 572.08917ms 577.759762ms 577.764449ms 598.410497ms 599.465374ms 630.087597ms 633.520555ms 640.694001ms 648.754855ms 661.552056ms 666.465036ms 674.381629ms 675.297619ms 677.645066ms 679.878081ms 683.41611ms 683.54363ms 692.28997ms 703.203281ms 703.270995ms 706.776749ms 712.013008ms 713.994848ms 715.294579ms 715.650582ms 717.947743ms 718.966518ms 722.279393ms 722.995541ms 723.789584ms 724.367344ms 725.692662ms 730.103297ms 733.687197ms 734.661666ms 735.442154ms 740.017493ms 742.251449ms 747.107872ms 747.835541ms 750.703149ms 764.302452ms 766.67745ms 766.805331ms 773.656355ms 773.761743ms 774.691014ms 776.316148ms 776.9793ms 778.748544ms 779.559918ms 780.568979ms 781.084945ms 782.504398ms 785.44414ms 786.029548ms 788.259155ms 794.857924ms 798.038412ms 799.363152ms 799.844878ms 801.563754ms 802.432993ms 802.895521ms 805.190281ms 810.263858ms 814.811656ms 815.982198ms 821.503996ms 824.578501ms 825.874866ms 849.519506ms 851.498353ms 862.677899ms 865.01043ms 868.319161ms 883.770057ms 898.544932ms 899.097361ms 903.054661ms 913.124367ms 917.999682ms 919.980789ms 924.930785ms 925.274838ms 926.286631ms 933.228598ms 939.028589ms 971.32113ms 978.266927ms 979.915987ms 996.244346ms 1.013943424s 1.019149288s 1.02527125s 1.026313266s 1.030175249s 1.031741705s 1.034593493s 1.036307083s 1.041666332s 1.044049466s 1.045053131s 1.04543622s 1.048926895s 1.069841886s 1.070462047s 1.070469063s 1.073961236s 1.078192368s 1.079499233s 1.079540169s 1.085914087s 1.086455823s 1.086684866s 1.090157453s 1.091390536s 1.094975992s 1.102756096s 1.103060374s 1.109655653s 1.111553602s 1.116211662s 1.119335588s 1.120907573s 1.133137893s 1.143429477s 1.145403307s 1.147331333s 1.150147865s 1.153369054s 1.160286928s 1.167037182s 1.169875987s 1.175994796s 1.178579934s 1.181463739s 1.183456923s 1.186177666s 1.187808364s 1.195736955s 1.196590171s 1.1991628s 1.203474891s 1.206281872s 1.210620314s 1.210975989s 1.21459547s 1.218438355s 1.219806926s 1.223105071s 1.23051296s 1.234837894s 1.240530085s 1.243917348s 1.244211882s 1.255312453s 1.257414984s 1.260259069s 1.277417522s 1.284906409s 1.285470113s 1.287153555s 1.297042834s 1.301772829s 1.307483094s 1.311064527s 1.318355169s 1.332032087s 1.361121046s 1.369323894s 1.369507831s 1.374833171s 1.398901298s 1.403606259s 1.407317204s 1.40984331s 1.413652221s 1.423226578s 1.439966982s 1.485934272s 1.499829942s 1.507287132s 1.515823295s 1.529289877s 1.538590868s 1.565659405s 1.587990275s 1.588110009s 1.591154943s 1.597208678s]
Jan  8 22:47:10.643: INFO: 50 %ile: 996.244346ms
Jan  8 22:47:10.643: INFO: 90 %ile: 1.369507831s
Jan  8 22:47:10.643: INFO: 99 %ile: 1.591154943s
Jan  8 22:47:10.643: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:47:10.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3291" for this suite.

• [SLOW TEST:19.725 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":266,"skipped":4421,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:47:10.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8570
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  8 22:47:10.729: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  8 22:47:46.928: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8570 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 22:47:46.928: INFO: >>> kubeConfig: /root/.kube/config
I0108 22:47:46.985775       9 log.go:172] (0xc000ac6420) (0xc0016d50e0) Create stream
I0108 22:47:46.985861       9 log.go:172] (0xc000ac6420) (0xc0016d50e0) Stream added, broadcasting: 1
I0108 22:47:46.990293       9 log.go:172] (0xc000ac6420) Reply frame received for 1
I0108 22:47:46.990350       9 log.go:172] (0xc000ac6420) (0xc0016d52c0) Create stream
I0108 22:47:46.990366       9 log.go:172] (0xc000ac6420) (0xc0016d52c0) Stream added, broadcasting: 3
I0108 22:47:46.992185       9 log.go:172] (0xc000ac6420) Reply frame received for 3
I0108 22:47:46.992217       9 log.go:172] (0xc000ac6420) (0xc0012f81e0) Create stream
I0108 22:47:46.992228       9 log.go:172] (0xc000ac6420) (0xc0012f81e0) Stream added, broadcasting: 5
I0108 22:47:46.993484       9 log.go:172] (0xc000ac6420) Reply frame received for 5
I0108 22:47:47.093556       9 log.go:172] (0xc000ac6420) Data frame received for 3
I0108 22:47:47.093651       9 log.go:172] (0xc0016d52c0) (3) Data frame handling
I0108 22:47:47.093688       9 log.go:172] (0xc0016d52c0) (3) Data frame sent
I0108 22:47:47.219975       9 log.go:172] (0xc000ac6420) Data frame received for 1
I0108 22:47:47.220274       9 log.go:172] (0xc000ac6420) (0xc0016d52c0) Stream removed, broadcasting: 3
I0108 22:47:47.220500       9 log.go:172] (0xc0016d50e0) (1) Data frame handling
I0108 22:47:47.220568       9 log.go:172] (0xc0016d50e0) (1) Data frame sent
I0108 22:47:47.220587       9 log.go:172] (0xc000ac6420) (0xc0016d50e0) Stream removed, broadcasting: 1
I0108 22:47:47.220709       9 log.go:172] (0xc000ac6420) (0xc0012f81e0) Stream removed, broadcasting: 5
I0108 22:47:47.220863       9 log.go:172] (0xc000ac6420) Go away received
I0108 22:47:47.221189       9 log.go:172] (0xc000ac6420) (0xc0016d50e0) Stream removed, broadcasting: 1
I0108 22:47:47.221217       9 log.go:172] (0xc000ac6420) (0xc0016d52c0) Stream removed, broadcasting: 3
I0108 22:47:47.221237       9 log.go:172] (0xc000ac6420) (0xc0012f81e0) Stream removed, broadcasting: 5
Jan  8 22:47:47.221: INFO: Waiting for responses: map[]
Jan  8 22:47:47.225: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8570 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 22:47:47.225: INFO: >>> kubeConfig: /root/.kube/config
I0108 22:47:47.268713       9 log.go:172] (0xc000ac6bb0) (0xc0016d57c0) Create stream
I0108 22:47:47.268873       9 log.go:172] (0xc000ac6bb0) (0xc0016d57c0) Stream added, broadcasting: 1
I0108 22:47:47.289070       9 log.go:172] (0xc000ac6bb0) Reply frame received for 1
I0108 22:47:47.289224       9 log.go:172] (0xc000ac6bb0) (0xc0016d5900) Create stream
I0108 22:47:47.289242       9 log.go:172] (0xc000ac6bb0) (0xc0016d5900) Stream added, broadcasting: 3
I0108 22:47:47.290458       9 log.go:172] (0xc000ac6bb0) Reply frame received for 3
I0108 22:47:47.290484       9 log.go:172] (0xc000ac6bb0) (0xc0017acbe0) Create stream
I0108 22:47:47.290492       9 log.go:172] (0xc000ac6bb0) (0xc0017acbe0) Stream added, broadcasting: 5
I0108 22:47:47.291729       9 log.go:172] (0xc000ac6bb0) Reply frame received for 5
I0108 22:47:47.350856       9 log.go:172] (0xc000ac6bb0) Data frame received for 3
I0108 22:47:47.350919       9 log.go:172] (0xc0016d5900) (3) Data frame handling
I0108 22:47:47.350951       9 log.go:172] (0xc0016d5900) (3) Data frame sent
I0108 22:47:47.446962       9 log.go:172] (0xc000ac6bb0) Data frame received for 1
I0108 22:47:47.447123       9 log.go:172] (0xc000ac6bb0) (0xc0016d5900) Stream removed, broadcasting: 3
I0108 22:47:47.447194       9 log.go:172] (0xc0016d57c0) (1) Data frame handling
I0108 22:47:47.447216       9 log.go:172] (0xc0016d57c0) (1) Data frame sent
I0108 22:47:47.447558       9 log.go:172] (0xc000ac6bb0) (0xc0017acbe0) Stream removed, broadcasting: 5
I0108 22:47:47.447581       9 log.go:172] (0xc000ac6bb0) (0xc0016d57c0) Stream removed, broadcasting: 1
I0108 22:47:47.447597       9 log.go:172] (0xc000ac6bb0) Go away received
I0108 22:47:47.448426       9 log.go:172] (0xc000ac6bb0) (0xc0016d57c0) Stream removed, broadcasting: 1
I0108 22:47:47.448512       9 log.go:172] (0xc000ac6bb0) (0xc0016d5900) Stream removed, broadcasting: 3
I0108 22:47:47.448523       9 log.go:172] (0xc000ac6bb0) (0xc0017acbe0) Stream removed, broadcasting: 5
Jan  8 22:47:47.448: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:47:47.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8570" for this suite.

• [SLOW TEST:36.807 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4434,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:47:47.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  8 22:47:47.526: INFO: Waiting up to 5m0s for pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285" in namespace "emptydir-2389" to be "success or failure"
Jan  8 22:47:47.563: INFO: Pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285": Phase="Pending", Reason="", readiness=false. Elapsed: 36.505075ms
Jan  8 22:47:49.571: INFO: Pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044516316s
Jan  8 22:47:51.949: INFO: Pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422656701s
Jan  8 22:47:54.146: INFO: Pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285": Phase="Pending", Reason="", readiness=false. Elapsed: 6.620121336s
Jan  8 22:47:56.156: INFO: Pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285": Phase="Pending", Reason="", readiness=false. Elapsed: 8.629641339s
Jan  8 22:47:58.168: INFO: Pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.641423228s
STEP: Saw pod success
Jan  8 22:47:58.168: INFO: Pod "pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285" satisfied condition "success or failure"
Jan  8 22:47:58.173: INFO: Trying to get logs from node jerma-node pod pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285 container test-container: 
STEP: delete the pod
Jan  8 22:47:58.246: INFO: Waiting for pod pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285 to disappear
Jan  8 22:47:58.267: INFO: Pod pod-c267fea9-8a7a-41af-85c8-a4c2fdd84285 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:47:58.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2389" for this suite.

• [SLOW TEST:10.957 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:47:58.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:47:58.592: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-feb3a5b8-2577-4c83-9cbf-ef70464196c5" in namespace "security-context-test-3995" to be "success or failure"
Jan  8 22:47:58.606: INFO: Pod "alpine-nnp-false-feb3a5b8-2577-4c83-9cbf-ef70464196c5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.587044ms
Jan  8 22:48:00.611: INFO: Pod "alpine-nnp-false-feb3a5b8-2577-4c83-9cbf-ef70464196c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019633478s
Jan  8 22:48:02.628: INFO: Pod "alpine-nnp-false-feb3a5b8-2577-4c83-9cbf-ef70464196c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036294275s
Jan  8 22:48:04.636: INFO: Pod "alpine-nnp-false-feb3a5b8-2577-4c83-9cbf-ef70464196c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044623811s
Jan  8 22:48:06.643: INFO: Pod "alpine-nnp-false-feb3a5b8-2577-4c83-9cbf-ef70464196c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051516827s
Jan  8 22:48:06.643: INFO: Pod "alpine-nnp-false-feb3a5b8-2577-4c83-9cbf-ef70464196c5" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:48:06.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3995" for this suite.

• [SLOW TEST:8.261 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4458,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:48:06.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  8 22:48:06.911: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3981 /api/v1/namespaces/watch-3981/configmaps/e2e-watch-test-label-changed 5775ce34-41ea-4c52-8618-d10b797e4266 908864 0 2020-01-08 22:48:06 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 22:48:06.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3981 /api/v1/namespaces/watch-3981/configmaps/e2e-watch-test-label-changed 5775ce34-41ea-4c52-8618-d10b797e4266 908865 0 2020-01-08 22:48:06 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  8 22:48:06.912: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3981 /api/v1/namespaces/watch-3981/configmaps/e2e-watch-test-label-changed 5775ce34-41ea-4c52-8618-d10b797e4266 908869 0 2020-01-08 22:48:06 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  8 22:48:17.089: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3981 /api/v1/namespaces/watch-3981/configmaps/e2e-watch-test-label-changed 5775ce34-41ea-4c52-8618-d10b797e4266 908902 0 2020-01-08 22:48:06 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 22:48:17.089: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3981 /api/v1/namespaces/watch-3981/configmaps/e2e-watch-test-label-changed 5775ce34-41ea-4c52-8618-d10b797e4266 908903 0 2020-01-08 22:48:06 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  8 22:48:17.089: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3981 /api/v1/namespaces/watch-3981/configmaps/e2e-watch-test-label-changed 5775ce34-41ea-4c52-8618-d10b797e4266 908904 0 2020-01-08 22:48:06 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:48:17.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3981" for this suite.

• [SLOW TEST:10.439 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":270,"skipped":4462,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:48:17.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:48:17.265: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:48:18.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9807" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":271,"skipped":4474,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:48:18.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-2aa0ceee-fbc3-4fb3-9f21-7730ba8a093f
STEP: Creating a pod to test consume secrets
Jan  8 22:48:18.827: INFO: Waiting up to 5m0s for pod "pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63" in namespace "secrets-9605" to be "success or failure"
Jan  8 22:48:18.836: INFO: Pod "pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63": Phase="Pending", Reason="", readiness=false. Elapsed: 9.366074ms
Jan  8 22:48:20.841: INFO: Pod "pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014219003s
Jan  8 22:48:22.855: INFO: Pod "pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027732645s
Jan  8 22:48:24.861: INFO: Pod "pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03442787s
Jan  8 22:48:27.503: INFO: Pod "pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.675747623s
STEP: Saw pod success
Jan  8 22:48:27.503: INFO: Pod "pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63" satisfied condition "success or failure"
Jan  8 22:48:27.509: INFO: Trying to get logs from node jerma-node pod pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63 container secret-volume-test: 
STEP: delete the pod
Jan  8 22:48:27.566: INFO: Waiting for pod pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63 to disappear
Jan  8 22:48:27.576: INFO: Pod pod-secrets-dfd73e6a-e5e7-4379-bdf0-fa7d40f24c63 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:48:27.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9605" for this suite.

• [SLOW TEST:8.861 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4478,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:48:27.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan  8 22:48:27.767: INFO: >>> kubeConfig: /root/.kube/config
Jan  8 22:48:30.389: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:48:42.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6993" for this suite.

• [SLOW TEST:14.454 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":273,"skipped":4479,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:48:42.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan  8 22:48:42.135: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3ac5d19b-f8cd-4dfa-a76e-38b566435827" in namespace "security-context-test-1412" to be "success or failure"
Jan  8 22:48:42.142: INFO: Pod "busybox-user-65534-3ac5d19b-f8cd-4dfa-a76e-38b566435827": Phase="Pending", Reason="", readiness=false. Elapsed: 6.461418ms
Jan  8 22:48:44.148: INFO: Pod "busybox-user-65534-3ac5d19b-f8cd-4dfa-a76e-38b566435827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012868105s
Jan  8 22:48:46.160: INFO: Pod "busybox-user-65534-3ac5d19b-f8cd-4dfa-a76e-38b566435827": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024779613s
Jan  8 22:48:48.230: INFO: Pod "busybox-user-65534-3ac5d19b-f8cd-4dfa-a76e-38b566435827": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094565759s
Jan  8 22:48:50.260: INFO: Pod "busybox-user-65534-3ac5d19b-f8cd-4dfa-a76e-38b566435827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.124129615s
Jan  8 22:48:50.260: INFO: Pod "busybox-user-65534-3ac5d19b-f8cd-4dfa-a76e-38b566435827" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:48:50.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1412" for this suite.

• [SLOW TEST:8.222 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4498,"failed":0}
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:48:50.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jan  8 22:48:50.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5300'
Jan  8 22:48:51.199: INFO: stderr: ""
Jan  8 22:48:51.199: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 22:48:51.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5300'
Jan  8 22:48:51.473: INFO: stderr: ""
Jan  8 22:48:51.473: INFO: stdout: "update-demo-nautilus-962xb update-demo-nautilus-gfqbq "
Jan  8 22:48:51.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-962xb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:48:51.585: INFO: stderr: ""
Jan  8 22:48:51.586: INFO: stdout: ""
Jan  8 22:48:51.586: INFO: update-demo-nautilus-962xb is created but not running
Jan  8 22:48:56.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5300'
Jan  8 22:48:57.504: INFO: stderr: ""
Jan  8 22:48:57.504: INFO: stdout: "update-demo-nautilus-962xb update-demo-nautilus-gfqbq "
Jan  8 22:48:57.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-962xb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:48:58.437: INFO: stderr: ""
Jan  8 22:48:58.437: INFO: stdout: ""
Jan  8 22:48:58.437: INFO: update-demo-nautilus-962xb is created but not running
Jan  8 22:49:03.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5300'
Jan  8 22:49:03.611: INFO: stderr: ""
Jan  8 22:49:03.611: INFO: stdout: "update-demo-nautilus-962xb update-demo-nautilus-gfqbq "
Jan  8 22:49:03.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-962xb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:03.766: INFO: stderr: ""
Jan  8 22:49:03.766: INFO: stdout: "true"
Jan  8 22:49:03.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-962xb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:03.925: INFO: stderr: ""
Jan  8 22:49:03.925: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:49:03.925: INFO: validating pod update-demo-nautilus-962xb
Jan  8 22:49:03.931: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:49:03.931: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 22:49:03.931: INFO: update-demo-nautilus-962xb is verified up and running
Jan  8 22:49:03.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfqbq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:04.110: INFO: stderr: ""
Jan  8 22:49:04.110: INFO: stdout: "true"
Jan  8 22:49:04.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfqbq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:04.304: INFO: stderr: ""
Jan  8 22:49:04.304: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 22:49:04.304: INFO: validating pod update-demo-nautilus-gfqbq
Jan  8 22:49:04.317: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 22:49:04.317: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 22:49:04.317: INFO: update-demo-nautilus-gfqbq is verified up and running
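The kubectl invocations above use Go text/template output (-o template). A stdlib-only sketch evaluating the same pod-name template against a decoded pod list; note that kubectl additionally registers helpers such as "exists" (used in the readiness template above) which plain text/template does not provide:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// A trimmed stand-in for the `kubectl get pods -o json` payload.
	const podList = `{"items":[
		{"metadata":{"name":"update-demo-nautilus-962xb"}},
		{"metadata":{"name":"update-demo-nautilus-gfqbq"}}]}`

	var data map[string]interface{}
	if err := json.Unmarshal([]byte(podList), &data); err != nil {
		panic(err)
	}

	// The exact template string from the log's pod-listing command.
	tmpl := template.Must(template.New("names").Parse(
		`{{range .items}}{{.metadata.name}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Prints: update-demo-nautilus-962xb update-demo-nautilus-gfqbq
}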
STEP: rolling-update to new replication controller
Jan  8 22:49:04.321: INFO: scanned /root for discovery docs: 
Jan  8 22:49:04.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5300'
Jan  8 22:49:36.718: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  8 22:49:36.718: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 22:49:36.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5300'
Jan  8 22:49:36.865: INFO: stderr: ""
Jan  8 22:49:36.865: INFO: stdout: "update-demo-kitten-cq7mt update-demo-kitten-drknm "
Jan  8 22:49:36.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cq7mt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:36.986: INFO: stderr: ""
Jan  8 22:49:36.986: INFO: stdout: "true"
Jan  8 22:49:36.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cq7mt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:37.100: INFO: stderr: ""
Jan  8 22:49:37.100: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  8 22:49:37.100: INFO: validating pod update-demo-kitten-cq7mt
Jan  8 22:49:37.105: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  8 22:49:37.105: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  8 22:49:37.105: INFO: update-demo-kitten-cq7mt is verified up and running
Jan  8 22:49:37.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-drknm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:37.202: INFO: stderr: ""
Jan  8 22:49:37.202: INFO: stdout: "true"
Jan  8 22:49:37.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-drknm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5300'
Jan  8 22:49:37.320: INFO: stderr: ""
Jan  8 22:49:37.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  8 22:49:37.321: INFO: validating pod update-demo-kitten-drknm
Jan  8 22:49:37.329: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  8 22:49:37.329: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  8 22:49:37.329: INFO: update-demo-kitten-drknm is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:49:37.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5300" for this suite.

• [SLOW TEST:47.068 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":275,"skipped":4498,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:49:37.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan  8 22:49:48.015: INFO: Successfully updated pod "annotationupdateb518d79a-1c6f-4a4f-b068-9ff116ec9ecb"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:49:50.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2360" for this suite.

• [SLOW TEST:12.749 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4500,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:49:50.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan  8 22:49:50.182: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:50:08.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8668" for this suite.

• [SLOW TEST:18.069 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":277,"skipped":4525,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan  8 22:50:08.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan  8 22:50:09.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan  8 22:50:11.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:50:13.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 22:50:15.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714120609, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
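
The three status dumps above are the framework polling until the webhook Deployment reaches minimum availability (the Available condition flips from False once ReadyReplicas is nonzero). Done by hand, roughly the same wait, using the deployment name and namespace from this run, would be:

    # Blocks until the rollout completes or the timeout expires.
    kubectl rollout status deployment/sample-webhook-deployment \
      --namespace=webhook-3936 --timeout=5m
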
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan  8 22:50:18.192: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan  8 22:50:18.239: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan  8 22:50:18.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3936" for this suite.
STEP: Destroying namespace "webhook-3936-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.429 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":278,"skipped":4529,"failed":0}
SSSSSSS
Jan  8 22:50:18.591: INFO: Running AfterSuite actions on all nodes
Jan  8 22:50:18.591: INFO: Running AfterSuite actions on node 1
Jan  8 22:50:18.591: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6111.388 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS